Analysis Summary
Performed Authenticity
The deliberate construction of "realness" — confessional tone, casual filming, strategic vulnerability — designed to lower your guard. When someone appears unpolished and honest, you evaluate their claims less critically. The spontaneity is rehearsed.
Goffman's dramaturgy (1959); Audrezet et al. (2020) on performed authenticity
Worth Noting
Positive elements
- This video provides a high-quality technical breakdown of Go 1.26 syntax changes and performance improvements from actual contributors and educators.
Be Aware
Cautionary elements
- The host's 'pretty face' / 'know nothing' persona is a calculated framing device to make the corporate environment of JetBrains feel like a community-led 'party'.
Knowing about these techniques makes them visible, not powerless. The ones that work best on you are the ones that match beliefs you already hold.
This analysis is a tool for your own thinking — what you do with it is up to you.
Related content covering similar topics.
- 🎬 Download and Play YouTube Clips with bash/Go (RWXROB)
- How To Trade Orderflow On MMT - LIVE DEV and Community Stream (Anthony GG)
- Java's Plans for 2026 - Inside Java Newscast #104 (Java)
- Are Programming Languages About to Disappear? (Stefan Mischook)
Transcript
We're live. Hello everybody. >> Hello. Hello. >> Welcome to the Go 1.26 live stream. We're here to celebrate the release of the new version of the language. I'm super excited, but I have to tell you all a secret before we really get into it, and it's not really a secret, you'll figure it out sooner rather than later: I know exactly one thing about Go, and that's how it's spelled. So I know literally nothing. Literally nothing. But this is not a problem, we will figure this out together. I'm super excited to be here and learn a little bit about Go, and I have two experts with me who will carry me along the way, basically a little bit like dead weight, but we'll get there, I promise. So, because I don't know anything about Go, I would be super curious if you could put in the chat: what's your favorite feature of the Go 1.26 version? Just to help me get up to speed. And while we do that, I would love to introduce my... I wanted to say guests, but that doesn't seem quite right, because you're really the main attraction here, and I'm just a pretty face doing the moderation today. So, L comes before N, so let's start with Alex. Alex, would you mind introducing yourself real quick? >> Oh, for sure. Hi, I'm Alex Hughes, I'm from Brazil, and Brazil mentioned! I'm working as a software engineer in Brazil, I navigate in leadership, and I love community, so I'm participating in events, conferences and workshops, writing books about Go and other things like Zig, but that's not for today, it's just Go. And I'm happy to be here to celebrate the new version of Go. >> Awesome. Thank you so much for joining us and for helping me along the way. Anton, what about you? Can you introduce yourself for everyone in the audience? >> Sure, sure. Hello folks. My name is Anton.
I work with Go both at my job and in my free time; I've been preparing interactive tours for Go releases for the past few years. That was pretty exciting, and that's why I'm very excited to be here today to talk to you about the new Go 1.26 release. >> Well, that's exciting. So, because I asked the audience a question, it would be mean not to ask you guys the same one. What is your favorite feature in Go? I always need a mental break to get the number right, 1.26, whatever you say. What's your favorite feature? >> Probably type checking for errors, for me. >> So Go has an error-as-type concept now, is that a fair statement? >> It does, yeah. We'll see shortly. >> I would love JavaScript, or I should say TypeScript, to have that, but yeah. >> Maybe one day, maybe. >> For me, the most exciting features are maybe four features, two of them experimental. I really love that we now have goroutine leak detection; it's experimental, but it's coming in this version. We also added SIMD capability in Go, which is something everyone has been hoping for in terms of performance. But to be fair to the stable features, not just the experimental ones, I really like that one of the I/O functions doubled its performance out of thin air. And another one I have in mind: now we can know the cause of a signal, who cancelled it, which is very important for troubleshooting and debugging. So those are my preferred features of this version. >> Based on Anton's reaction, that seems like a very big deal, so I'll just roll with it. It sounds super interesting and I hope to learn more about it. Because I'm peak prepared and absolutely did not have marketing prepare some slides for me, we have some slides that I would like to get up, just to go over some housekeeping things.
So, once again, welcome everybody to the... before I say it 15 times wrong: do you say Go 1.26, or Go 1.26? I think we can go with 26. What do you think, Anton? >> 26, it's better. >> I agree. >> 26, okay. I need the chat here to let me know; I've developed a little bit of a trust issue, I feel like you're just letting me run into the open knife here. So, chat, please let me know if they are lying to me. All right, Go 1.26, welcome to the release party, super excited to celebrate this. And if you have any questions, this is intentionally a live event, just put them in the chat. I will try my best to find the best breaking points while our fantastic speakers do their presentations and slip in the questions as well as possible; so just put them in the chat and we'll get to them either very quickly or in a block afterwards. The other thing that is usually asked during these kinds of live streams: this is on YouTube, this is the internet, and the internet does not forget. This live stream will be up on YouTube right after the event. So if you have to grab a coffee, or get some work done, come back later and watch the video; it will be right here afterwards. And the last thing: we have a little giveaway prepared, an extended GoLand trial. So if you were ever curious what this GoLand thing is about, this is your chance. Scan the QR code and use the code we prepared, go release party. I think someone in the background will also put it in the chat so you have it available there too. Check it out, because, I'm so on top of things, I know that the GoLand team moved heaven and earth to have support for Go 1.26 ready in the latest version already. So check it out, it's super cool. And those were my last housekeeping things. Let me check my agenda to see if I forgot anything. I don't think so.
I don't think so. But I do have one more small question, and in the background we can leave the slide open in case someone needs to scan it. Before we come to Anton's talk: was there anything in the new release that (a) surprised you, or (b) was completely missing, where you thought, how did they not add this? >> Well, for me it was the size of the release. I was making another one of the interactive tours, this time for Go 1.26, and I was really surprised how huge it is. For me it's the biggest release I have seen. The sheer number of features that the team has packed into the release is astonishing. >> Oh, I don't miss anything, because I'm very satisfied with Go. Since maybe 1.16, the next ten versions brought us good stuff, but the language, for me, was complete a long time ago. The performance gains are good, but in terms of language features I don't feel that we need much more than we have right now. >> All right. I might ask this question again at the end, so I'll leave it for now, build up a little bit of momentum. So, Anton, you have something prepared, and I cannot wait to see what you've got. Why don't you take it away with your five key features? >> Sure, sure. I have already introduced myself, so now I'll try to introduce the release to you. There are really a lot of features to see; we'll start with four features now, and later we'll see some more from me and from Alex. So let's begin. The first thing is the new built-in with expressions. You probably all know what the new built-in is: it is used to allocate a variable of a specific type. Here in this example, we allocate an integer variable and then we assign the value 42 to it. Pretty simple, basic stuff, and it works pretty well, but it's kind of wordy, because we have to first allocate and then assign.
It would be much better if we could do the allocation and the assignment in one step, and that's exactly what we can do with Go 1.26. Now the new built-in accepts any expression: we can pass a specific value, or a struct, or the result of a function call, and the Go runtime will allocate a variable, assign the value, and return a pointer to it. It may not seem like a big deal, but there is one specific scenario where it can be really useful, and that's when you work with structs that may be configuration options, or any data that you serialize or deserialize to JSON or protobuf. In these kinds of structs you often have fields which are optional, which can take either nil or a specific value. As in our example: we have a Cat struct with a Fed field which can be either true, false, or nil. Previously, if we wanted to assign a specific value to the Fed field, we would have to declare a variable, assign a value to it, and then pass it to the struct initializer. But now we can just call new with a specific value, and that's all. I know that many projects have helper functions, named something like ptr, which accept a value and return a pointer to that value, created specifically for this use case. Now we can just delete these helper functions and use the new built-in. It's a pretty nice addition if you work with these kinds of structs. Okay, moving on. The next thing, and this is the one you'll probably see the most in regular code, is related to error checking. You're probably all familiar with the errors.As function, which allows you to check whether a specific error value, err in this example, matches a certain type; in this case the type is a pointer to AppError.
So how do we work with it? We first declare a target variable, then we call errors.As with the error value, passing the target variable, and we get a boolean result; if the result is true, meaning err is in fact of type *AppError, the target variable is also filled with the specific error value. You can probably tell from how wordy this description was that it is also wordy in code. It would be better if we could scope everything to the if branch and not have to declare the variable in advance, and that, my friends, is exactly what we can do now using the new errors.AsType function. As you can see, this is a generic function: as its generic parameter it takes a specific error type, and as its regular parameter it takes the error value. So we can immediately see whether the error matches this type, and we also get the specific error value in return. It may not look like a big deal, but it is when you start checking for multiple errors. In this case we have an error and we want to check whether it's an operation error, or a DNS error, or some other kind of error. Here you can see that we can keep all the variables scoped to their if branches: the connection error in the connection-error branch, and the DNS error in the DNS branch. The whole construction starts to look kind of like a switch, which is really nice to have for errors, because, as you of course know, we can't really use a regular switch with errors, since they can be wrapped and so on. But thanks to errors.AsType, this if/else chain works kind of like a poor man's switch on the error type. Another nice thing about errors.AsType is that, since it's generic, it's much more type safe than the old errors.As: it can check at compile time whether the type you have passed actually implements the error interface.
Previously, with errors.As, you would only see that error at runtime, but now the code won't even compile if something is wrong, which is really convenient. You don't have to wait until your application crashes; you can see immediately, as in this example, that the type we passed doesn't satisfy error, so we need to fix the code. Also, thanks to the new implementation, the function is now faster: it allocates less and does not use the reflect package at all. And since it covers pretty much all the use cases of the old errors.As function, the Go team recommends that from now on we use errors.AsType instead of errors.As; you can safely forget about the old one. We'll also see later another feature, the renewed go fix command; you can use it to automatically update and modernize your code, including changing from errors.As to errors.AsType, which is really nice. Okay, maybe there are some questions before we move on to garbage collection. >> As of now, I don't see any questions in the chat. If anyone has a question, fire away, now's a great time. Also, on your statement about forgetting errors.As: already forgotten. Easy. That was a piece of cake. >> Nice, nice. The new release is already working very well. >> Yep, I'm cutting edge right now. [laughter] >> Great, absolutely awesome. Okay, let's move on. >> I don't see any questions, so go away... go, go, go ahead. [laughter] >> Oh, I will, I will. Okay, so let's talk now about garbage collection. You know that in Go, unlike in other languages like Java, we don't have many garbage collectors. We have a single one which works really well. It doesn't have a lot of configuration options; we don't need to tweak it or think about it at all. It just works, most of the time it works great, and we can focus on writing the application code, which is our job.
But it turns out that while the old garbage collector worked pretty well, it had some drawbacks. Go's garbage collector is a mark-and-sweep collector, which means it first needs to go over the heap and mark all objects to see whether they are reachable or unreachable, and after that it can free all the unreachable objects. The objects in the heap are of course located in different parts of memory; as in this picture, they can be in different regions. What the old garbage collector did (I'm simplifying things, of course) is start with a single object A, see that this object references another object B, jump to object B, from there jump to object C, and so on. So the old garbage collector was jumping all over the heap memory. Another thing here is that the CPU cannot work with main memory directly: it must first load a chunk of memory into the CPU cache, and only then can it process it. Since the objects can be located in very different areas of memory, this jumping meant the CPU was mostly waiting for memory to arrive from main memory into the cache. The Go team found that the CPU can spend up to 35% of all garbage collection time just waiting for memory to arrive, which, as you can probably tell, is not very optimal. It would be much better if we could find a way for the garbage collector to scan objects in a linear fashion, so that the CPU loads a chunk with multiple objects and can process them all without waiting for memory to arrive from a different part of the heap. And the interesting thing is that Go already has something very well suited for that: the Go runtime allocator. The Go allocator doesn't just allocate memory from random parts of the heap.
It uses structures called spans, which are linear blocks of memory, 8 kilobytes each, split into equal slots according to object size. For example, here we have a span with slot size 32, which means each slot in this span is exactly 32 bytes. So if the allocator works in a linear fashion, putting objects into slots in these spans, why not make the garbage collector also use these spans? And that is exactly what the Go team did. The new garbage collector, which is called Green Tea, works with spans, and since all the objects in a span are located next to each other, the CPU can use its cache effectively: when it loads a chunk of memory for object A in this example, it will also automatically load objects B, C, and D, because they are all adjacent. With this design the garbage collector uses the CPU much more efficiently; there are fewer cache misses, and the CPU spends more time actually scanning memory rather than waiting for it to arrive in the cache. Now, a word of caution here. I would not expect an average application to see some great improvement with the new garbage collector, because the old one, frankly speaking, was already fine. If your application is not constrained by a huge number of allocations, you will probably not see large improvements, maybe just something more marginal. But on the other hand, if you allocate a really large number of small objects (and this new garbage collector is optimized specifically for small objects), you may see, according to some reports, something like a 5% reduction in overall CPU usage, which is pretty nice given that it comes for free. We don't need to do anything, just upgrade to Go 1.26 and go with the new garbage collector. It is available and enabled by default. You can temporarily switch back to the old one, but that option will be removed in future releases.
So going forward we will still have a single garbage collector, and it will be Green Tea. Okay, moving forward. This is the last of the top four features I prepared: the go fix command. You may have heard of it; it was very useful in the early years of Go. It is a tool that runs heuristics on your code and improves it. Unfortunately, as Go moved on, the go fix command kind of stayed in the past and was not used that much, until the Go 1.26 release. Now it's making a wonderful comeback. It is now implemented using the same back end as the honored go vet tool. So now we have two modern tools, go vet and go fix. They use the same so-called analysis framework, but they have two different purposes. go vet points out possible problems in your code, so you can review them and see whether they really are problematic or not. go fix has a different purpose: it modernizes your code. It modifies it so that it uses modern language and standard library features, rewriting the code automatically, and the Go team says that all the changes go fix makes are safe. So basically you don't even have to review the changes; you can just make it a part of your workflow, for example in CI, and be sure the changes are safe and that you are staying on current language features. There are probably no surprises here: it is just a command-line tool. You run go fix, you specify the package or packages, and you can optionally tell it which heuristics, the so-called analyzers, to run; each analyzer is responsible for one particular refactoring. Here we can look at just one example. This is the old-fashioned way of working with sync.WaitGroup: we manipulate the wait group counter by hand, incrementing it by calling the Add method and decrementing it by calling the Done method.
Of course, in modern Go versions this can be replaced with the Go method, which does the Add and Done automatically. So if we run go fix with the specific analyzer, conveniently called waitgroup, it will do this refactoring and rewrite the code. I should also say that GoLand will probably support all these refactorings very soon, so you may not even have to run go fix by hand in the CLI, because the IDE will do it for you. But I still think you can make it part of your pre-commit hook or something, in case someone decides to contribute to your project using Notepad or vi, so it can still be useful. The Go team also says that going forward it will be easy to create your own analyzers, specific to your project, which you can also run with go fix. Currently it's already possible, just not that easy to do, but it will probably become easier in the future. So, that was go fix. As we have already discussed, there are really a lot of features in Go 1.26, and we will see some of them very soon. I also want to mention that if, after the event, you would like to read more about these features, you can visit this link; it's a link to the interactive tour of the Go 1.26 release. Thank you very much, and if there are any questions, now is a great time to ask them. >> Thank you very much for sharing this. My personal learning is that Go has a Go method. I think that is hilarious. I'm well aware I have the humor of a three-year-old, but I still think it's hilarious. All right, coming to the questions. Actually, before we come to the questions, a quick word from our sponsor, because there is a code in the chat. That code is for a six-month trial license for GoLand. So if you want to play with Go 1.26 in GoLand, that is the code. For anyone who might have joined late, that is what this magical code floating around is all about.
Okay, coming to the questions. In the beginning, before I prepared for this event, I was thinking of making a Rust joke, but I put that aside like the professional I am. [laughter] The question that came up in the chat was: the Rust language is seeing increased use in cloud applications; does this somehow influence the GC of Go 1.26? Anton, what do you think? >> I think the main influence is just modern computing in general, because now we have servers with many cores, and the old design, where we spend a lot of time on CPU cache misses, is very inefficient for these multi-core machines, when we have not four or eight cores but, like, 64 cores. The Go team was definitely seeing that the garbage collector could perform much better in these high-performance computing scenarios, so to say. That was clearly one of the main motivations for the new garbage collector. >> Thank you. There are more questions about the garbage collector; this seems to be a very hot topic. I think you mentioned this in the presentation, but just to reiterate: how big is the difference compared to the old garbage collector? >> Well, synthetically speaking, in the Go release notes you'll read something about 30 to 40% more effective garbage collection for applications that allocate a lot of small objects. But according to the real-world tests I have seen, if we talk not about garbage collection in isolation but about the overall CPU usage of your application: if it really allocates a lot, if it's constrained by allocations, you may see something like a 5% CPU usage reduction in general; if your application is not bound by allocations, you will barely see any improvement at all. But, you know, don't take my word for it; just run it with your application and see how things change according to your observability metrics. It's the best way to do it. >> Fair point. All right, last question and then we keep going.
Actually, never mind, two more questions and then we get going. All right. If we have a map or struct where millions of objects are created every minute and older objects are deleted, will this garbage collector help? That sounds like a very specific question. Or is that an everyday Go developer kind of thing? >> Yes, it very well may help, if these objects all go to the heap. So if you run escape analysis and you see that all these objects are in fact allocated on the heap and not on the stack, which is probably the case if we are talking about a huge map, and these objects are small, under half a kilobyte, then yes, you can see improvements with the new garbage collector. >> Sweet. All right, last question for this block. Oh, I already asked that one; that question was in the chat twice. Easy. Then we have space for this question: are the experimental vectorized operations coming to ARM or RISC-V? RISC-V sounds like a new 'The Boys' season. Anyway, is that coming anytime soon? >> This is clearly a question from someone who is following the SIMD developments in Go closely. Unfortunately, in Go 1.26 we don't have ARM or RISC-V support; we only have support for AMD64 platforms. I must say that I'm not on the Go team, so I can't really say for sure when we will get other platform support, but from what I've heard, the Go team wants to get feedback from the developers who start using the SIMD vectorized operations on AMD64, and based on that feedback the Go team will probably broaden the support for such operations, so I hope we will see other platforms in the following releases. >> Cool. All right, that was the last question.
No more questions, at least for now, because we're already behind time and my inner German is yelling at me very loudly in my head. So, Anton, you have another block prepared for us, which is a little bit more workshop-driven, a little more live coding. I'm really excited to see some Go code. Not that it means a whole lot to me, but I'm super excited. So please take it away. >> Okay. Hello everyone, it is still me. We'll be doing this workshop together, and I will take this opportunity to tell you about some more Go 1.26 features. We will start with a feature called the secret mode. Now, I must tell you that this feature is related to cryptography, and I'm not an expert in cryptography by any means, as I suppose many of our listeners and viewers aren't either, but I just can't leave it out, because of this cool name. Which language really has a secret mode? Okay, so what is the secret mode really? It is just a package, conveniently called runtime/secret, which has a single Do function. You pass it an anonymous function, and it executes what's inside; and what secret.Do does is that when you leave the scope of this function, it automatically clears the CPU registers, and it clears the stack for all variables allocated on the stack inside the secret.Do scope. Also, if any variables were allocated on the heap inside the secret.Do block, the garbage collector will clear that memory as soon as these variables become unreachable. And by clearing the memory I don't mean freeing it: freeing memory, in terms of the Go runtime, just means marking a specific memory block as free so it can be overwritten with other data. In this case the runtime will actually zero out the memory, so the old data that was there is absolutely inaccessible. So why would we go through such trouble?
It turns out that this is needed for secure crypto communications in modern protocols. I'll try to simplify the whole process, but basically the essence is this. You have two parties which communicate with each other and encrypt data, and they do it in sessions: they must decide on a session key, and they use this session key to encrypt the traffic inside the session. But the modern approach to security dictates that even if the private key of one of the parties is leaked, the attacker should not be able to decrypt past sessions, even with the leaked private key. And to achieve that, for each session we generate temporary keys, from which we then derive the session key, and we use this session key to encrypt and decrypt the data. I hope this is not too indirect. The thing is, previously, we can see here on the screen what we do: we have the temporary private key; we create the shared secret, which is also temporary; and then we derive a session key, which we return from the function so that it can be used during the session. If we do just this, without the secret.Do function, what happens? The two temporary variables are no longer needed, but they stay in memory physically. The memory will of course be freed, but the values stay in memory, and if an attacker gains access to the memory, they can see these values, derive the session key from them, and then decrypt the session data, which is not what we want at all. So secret.Do protects against this particular scenario. It guarantees that as soon as we leave the scope, these two variables are zeroed out. As for the session key, since it leaves the scope, it will not be cleared yet.
Of course it's needed for the session, but as soon as the session ends and the garbage collector sees that the variable is no longer reachable, it will also zero out the memory in the heap, which is very convenient. I must say that this function is mostly not for regular application developers; it's mainly for the developers of cryptographic libraries. They will use secret.Do inside their libraries, and we will just use the result of their work. So you will probably not encounter it much in your day-to-day work, but I felt the need to tell you about it, because secret mode is a really nice feature to have in a library. Okay, moving forward. The next thing I want to talk about is goroutine leaks. What is a goroutine leak? Essentially, it is a situation where a goroutine is stuck on some kind of synchronization object, like a channel, and while it is stuck, the other goroutines in the program continue to work, and the program as a whole continues to function. So we have this stuck goroutine and other goroutines working just fine. This is quite a bad situation, because leaking goroutines means leaking resources, and sooner or later your application as a whole will start seeing the negative effects of this. But leaks can be tricky, because unlike other concurrency-related problems, like data races or deadlocks, they are mostly invisible. A deadlock, as you know, results in a panic: if you have a deadlock in your program, it will panic, you will have a stack trace, and you can investigate. If you have data races, you can enable Go's data race detector (thanks to the Go team for providing it) and track those races too. But leaks remained invisible for quite a long time. The situation started to change with the Go 1.25 release, when the Go team shipped the synctest package, which can be used to track goroutine leaks during testing very efficiently.
I highly recommend you try it, because it really works well. Of course, you can't use synctest in production; that would be really weird. So in this release, Go 1.26, we finally have a means to track goroutine leaks in production. Let's see the example. Here we have a function which creates an output channel, returns it to the caller, and also starts an internal goroutine that sends a value to this channel. As you can imagine, if the caller (the main function in our case) doesn't read from the output channel, this internal goroutine will be stuck forever. This is an example of a leak. Of course, it's a pretty simple one; we can detect it just by looking at it, but not all leaks are that simple. Now we can track this leak, and a lot of others, just by using the standard runtime/pprof package. runtime/pprof has a function called Lookup, which you can use to gather so-called profiles. You're probably aware of the CPU profile and the memory profile; they are often used to diagnose problems in production. Now there is another profile, conveniently named goroutineleak, and it does exactly what it says: it shows whether our code has any leaks. Let's run the program. Here I just print the profile to stdout, and it reports that we have two goroutines. Goroutine number one is the main goroutine, not that interesting. But look at what it says about the other one: it says that this goroutine is leaked, that it leaked on a channel send, and it even gives us the specific line, exactly the line where we predicted a leak. Indeed, it has a leak.
So as you can see, pretty easy stuff: you use runtime/pprof to collect the goroutine leak profile, and you analyze it either by reading the text output or by using the pprof tool. This is an experimental feature, but it's not experimental because it's unstable; the algorithm itself is production-ready. The Go team explicitly states that the only reason it's currently experimental is that they want to gather feedback on whether the API is convenient to use, that is, whether people like obtaining these profiles through the standard pprof Lookup function, as far as I understand. So I encourage you to try it in your apps in production to see if you have any leaks. Okay, continuing with goroutines. There is this thing in the runtime called runtime/metrics. Basically, metrics are stats about the runtime's health, how it functions. The runtime metrics are numerous, there really are a lot of them, but there were not many metrics related specifically to goroutines. Of course, we had a metric which returned the count of goroutines currently alive, the overall number of live goroutines, but that was it. Now we have much more information: we have the number of goroutines in each of the states they can be in. We have the number of goroutines running syscalls or cgo calls, the number of goroutines which are runnable but not currently running, the number of goroutines which are currently executing, and finally the number of goroutines which are waiting on synchronization objects. Why should you care at all? Well, because by looking at these metrics, and specifically at their dynamics over time, you can spot problems before you start seeing them in other observability tools, in logs, or, you know, in user complaints.
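Reading runtime metrics looks roughly like this. The sketch uses the long-standing live-goroutine metric name; the new per-state metric names mentioned in the talk aren't verified here, but they would be read the same way.

```go
package main

import (
	"fmt"
	"runtime/metrics"
)

// liveGoroutines reads the long-standing goroutine-count metric
// from runtime/metrics. The Go 1.26 per-state metrics the talk
// mentions would be sampled identically, just with other names.
func liveGoroutines() uint64 {
	sample := []metrics.Sample{{Name: "/sched/goroutines:goroutines"}}
	metrics.Read(sample)
	return sample[0].Value.Uint64()
}

func main() {
	fmt.Println("live goroutines:", liveGoroutines())
}
```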
For example, if you see that the number of runnable goroutines is steadily increasing over time, then even if your application doesn't show any signs of problems yet, something is probably not right: either the load is too big and your application can't process work at the rate it is created, or maybe there is a bug in the new release you deployed that creates excessive goroutines. That is why these metrics are good to have on your dashboard, in whatever observability tool you use. You will probably not use the runtime/metrics package directly; you'll likely use the Prometheus client or OTel or whatever tool you're on, and it will collect all these metrics for you. But it's good to know they exist, so that you can actually look at them instead of assuming the only available metric is the number of live goroutines. Oh, and as for threads, we also have a new metric, the total number of live threads. You know that threads are the engines that execute our goroutines, so that's also good to know. Okay. The next thing is not really a big deal, but I wanted to show it because it's related to iterators. Iterators are a relatively new feature, probably the newest language-level feature in Go, and after they appeared in the language, the Go team started adding iterators to various packages of the standard library. So over time, even if you don't follow the evolution of the standard library closely and may not have noticed, we get more and more iterators in the standard library. That's good to know if you like the iterator design, because now you can use them in your code instead of the old approach of explicit looping over lists or passing callback functions.
Okay, so let's look briefly at the new iterators in the reflect package. Here we have the http.Client type, and we want to see all the fields it contains. We can now just use for-range on the Fields method, and if we run it, we see the fields of the http.Client type. This is probably the only type that has a jar of cookies. Pretty cool. Similarly, we can list all the methods, and we can also list all the inputs and outputs of a function type. A function type, of course, is just a type declared as a function. Here, WalkFunc in the path/filepath package is just a function with three input parameters and one output parameter, and if we range over the ins and outs methods, which return iterators, we see exactly those three input parameters and one output parameter. Again, it's not a big thing. I just wanted to highlight that iterators continue to spread through the standard library, and if you like this design, it may make sense after each release to check where in the standard library they appeared, so that you can start using them in those packages. Of course, if you don't like iterators (some of us don't), you can just ignore these changes. Okay. Ah, the slog package. This one is probably very familiar to you. Some time ago, the Go team added the log/slog package to the standard library, and it was really a blessing, because from that moment on we don't really need any third-party logging packages: slog has all we need, we can just take it and use it. But it turned out it was missing one specific feature that some people requested. A common scenario, not for everyone but for quite a few people, is to log to two outputs, for example to stdout and to a file, which is what I'm trying to do here.
So I created a stdout handler. A handler, of course, is the thing that processes incoming log records and sends them somewhere, in this case to stdout. And there's another handler, the file handler, which logs to a file in JSON format. Previously, if I wanted my Info or Debug or Error calls to go to two destinations, I couldn't do that easily. Well, I could: some people used io.MultiWriter for this, but it has some drawbacks, so we won't talk about that further. Fortunately, now we have the so-called multi-handler. Basically, this is a handler which wraps other handlers; I can specify as many handlers as I want. Then, when I call the regular logger methods, the multi-handler receives the records and routes each record to each of the wrapped handlers, and each handler behaves just as it should, processing the records according to its internal logic. If we run this example, we see the info record on stdout, and if we look at what's in the log file, we see the same record, but in JSON format. So, thanks to the multi-handler, we can route our log records to multiple destinations. Pretty convenient, if you are into this kind of thing. I know that some people say you should only log to stdout and let the scraper tools handle the rest, but there are different situations, and logging to multiple destinations can in fact be beneficial. The next one is related to the Buffer type, which lives in the bytes package. A buffer, of course, wraps a certain number of bytes, which you can read using the Read method and the other methods of the Buffer type. Here we have a buffer containing this byte string:
"I love bytes". Previously we could read, for example, the first byte, then these four bytes, then these five bytes. This was added to the standard library long ago, so nothing new here. But now what we can do is peek at certain bytes without advancing the buffer, meaning we don't change the current position inside the buffer. For example, we are initially at the very beginning of the buffer, and by calling Peek I can look at the first byte while not advancing the buffer: it stays at the beginning, but I gain access to the data. If we call it, we see that indeed this is the byte "I". Okay, now let's advance the buffer two bytes more, one, two, so that we're at the beginning of the word "love". And if we peek four more bytes, we should receive "love". Peeking at love bytes sounds weird. Yep, here it is. Now, an interesting thing about the Buffer's Peek method is what it returns. As you can see, it's a slice of bytes, but it's not a copy; it is a view into the buffer. Nothing gets copied here, and if we change this slice, we will in fact change the underlying buffer. Let's see how that works. We change byte number two inside this slice, which is the byte "o", and set it to zero. Let's run it again. Yep, when we read the data, we see the changed bytes. This is a rather weird property, so maybe be careful with it and don't change the slice unless you have a specific reason to. Honestly, if you are just peeking at the data, why would you change it? Probably don't do that. And last but not least, maybe this is my favorite feature of the release; it certainly is one of them. The story here is as follows.
As you all very well know, we have two ways to create an error from a string in Go. If it's an unformatted error, just a plain error string without any parameters, we can call errors.New or we can call fmt.Errorf. Some people noticed that errors.New is marginally faster and allocates less than fmt.Errorf, so their recommendation was to use errors.New instead of fmt.Errorf whenever you want to create a plain error string. Now, this is a rather weird recommendation, because I hope that pretty much all Go applications exist for some reason other than creating errors, you know. Error creation is probably not in the hot path of your application, and it doesn't really matter that fmt.Errorf is slower than errors.New if the whole point of your application is not producing errors. But still, because Go developers care about performance, this was very important for some people, and finally one of the Go maintainers got tired of hearing that fmt.Errorf is slow and improved it. So now it's as efficient as errors.New. This is of course a small improvement, but I think it shows very well how the Go team cares about such things. Small things really mean a lot to us programmers, because adding huge new features can be easy, and those make product management happy, but what makes us happy is that the team really cares about the language and the standard library. Okay, let me show you the implementation. The clever thing is that the first thing Errorf does is call this internal function, and in its very first line it checks whether the format string has a percent symbol. If it doesn't, it means the error is unformatted and we don't need all this heavy machinery, so it immediately delegates the work to errors.New.
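The delegation trick can be sketched in a few lines; this is an illustration of the idea, not the actual fmt source:

```go
package main

import (
	"errors"
	"fmt"
	"strings"
)

// newError sketches the fast path the talk describes: if there are
// no arguments and the format string contains no '%', the formatting
// machinery is skipped and the work goes straight to errors.New.
func newError(format string, args ...any) error {
	if len(args) == 0 && !strings.Contains(format, "%") {
		return errors.New(format) // one allocation, no formatting
	}
	return fmt.Errorf(format, args...)
}

func main() {
	fmt.Println(newError("connection refused"))
	fmt.Println(newError("timeout after %d ms", 250))
}
```

Both paths produce an identical error value for a plain string, which is why the real optimization is invisible to callers.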
This really simple design allows fmt.Errorf to allocate as little as errors.New does, which is one allocation basically, and it's also as fast as errors.New. Not that it matters much, but it's good to have a fast fmt.Errorf, I think. So this is it from me. If there are any questions, I'm happy to answer them. >> Sweet. Thank you so much. As much as I love online events, because they're very accessible to anyone, they miss a little bit of the direct feedback you usually get when you're on a stage. So if I could get a little clapping emoji in the chat for Anton, that would be much appreciated. There are two questions in the chat that I want to go over real quick, so that we don't take any time away from Alex. The first was about secret mode, which, from a product perspective, is such a great name: if the data being allocated on the heap [snorts] is being zeroed, but memory keeps being allocated and never freed, could it lead to a memory leak? >> Well, if the memory is allocated and never freed, that is by definition a memory leak, I think. So it's not related to secret mode in any way; it's just a memory leak. I'm trying to think what this was supposed to be about. Maybe the question is: if the memory is leaked, will it be zeroed out? Well, no, because it will only be zeroed out once the garbage collector decides the memory is unreachable, that is, the variable is no longer used. So it must first decide that the data is unreachable, and then it will zero it out. First unreachable, then zeroed out. >> Cool. Thank you very much. And one last question before we go over to Alex, who can now mentally snap out of his little nap.
How about the performance of slog, for example in comparison to zap? Which I assume are libraries that everyone here is familiar with except me. [laughter] >> I must say I'm not familiar with zap's performance. >> I appreciate that, it makes me feel so much better right now. That is very, very empathetic of you. [laughter] >> Honestly, I just use slog and I'm happy about it. So that's my point of view. >> Okay. Because this answer was a little bit faster than I anticipated, I'll squeeze in one last question. What's "string slide"? Is that something that... I don't remember the context or where it came up. >> I don't know. >> Okay. >> What's "strate"? I don't know what it is. >> Okay. And maybe, Alv Pro, if you could re-specify, let us know. We'll get this sorted out somehow. Maybe, maybe not. We'll see. Anton, thank you so much. This was great, really appreciate you swinging by for this. >> Sure thing. Thank you. >> Alex, are you ready? >> Yeah. Yeah. Yeah. >> Okay, let's do this. I'm excited for your demo, because that's kind of the question I had at the beginning that I wanted to tease, but I don't want to take anything away from your talk. So, please. >> Thank you, and amazing rundown of features from Anton. So now let's see how GoLand can keep up with Go 1.26. This codebase is a simple one, to prove a point: it's a service health checker. The main idea is that we have some HTTP endpoints, and every time we call the check endpoint of this application, we take all these targets and fan out to the final endpoints, GitHub and Google in this example. This application is written against Go 1.25, so it's time to modernize it. And everyone is very, very anxious about how it performs on 1.26 because of the Green Tea garbage collector. So first things first: we can just bump the version here, and everything will be fine, provided you have Go 1.26 on your machine.
So the first thing in the IDE is to look at the GOROOT. In my case I have it, but if you have an older version, you need to swap it. Since we're fine with the GOROOT, it's time to just bump the version. When you bump the version, you'll notice a very snappy suggestion to analyze the code for syntax updates. That's very nice, because I don't like my tooling or my editor dragging me or slowing me down during development; if it's snappy, it's good enough. But there is another option here, about the highlights of Go 1.26, where you can learn more about this version specifically; some of these things Anton already addressed. The point is, when you go to this features page, you get an explanation of the older behavior, what you can expect in the new one, the proposal if you're interested in how these things progressed historically, the changes if you need to see the commits and pull requests, obviously the author, because people deserve some respect for that, and a go.dev link to the official docs. You might be wondering: okay, but I could find all these links by myself. True, but you can also open it as a tutorial. When you try out these things (I won't do it today), you go to a new temporary project to experiment, practicing and learning what the feature is capable of, and you can mess with the project. If you mess up a lot, you can just roll things back and try again until you've learned the stuff. You can do this for the various highlighted features here, but I will show it directly, so you don't need to go to the tutorial right now. I'd prefer you don't go to the tutorial right now; stay in the live stream.
So the first hint is to analyze the code for syntax updates. You can analyze the whole project; you could be in a monorepo, but this project is not, so analyzing the module has the same effect. So let's analyze the syntax for updates. The panel here at the bottom shows a couple of syntax updates, 11 in total. What are these 11? You can see them in the tree, but the summary is: you can fix pointer creation with new, and type unwrapping using errors.As. You can go directly to the files, or just apply all fixes for all three files and try to solve the eight problems found. I don't know if that's a good idea all the time, because you could be fixing generated code, and I don't believe you should change generated code; it should stay as it was generated. In that case, imagine this code is generated: just exclude it and move on. But that's not the case here. There is another point: I see this main file, but I don't know if the fixes are part of my change. If they are, it's okay to apply them for every element in this list. In this case we have a status code, and here's a timeout, and my change is about the same concern, so I'm okay just applying it, and then taking a look at the rest. It's similar: it's about timeouts, modernizing them with the new expression form, and this test is similar too. So I'm okay just applying the fix for every single problem we find here. Okay, it's fixed. For the checker, we have several instances of manual error unwrapping being translated to errors.As. These I just apply blindly, because the result is objectively better. Okay, the codebase is fixed for these two categories of fixes. I like to take another look at the commit panel to see what changes were made. Okay, they make sense to me. But as we mentioned in the previous block with Anton, not all fixes are in the IDE right now, so we can run go fix ourselves and not lose that possibility.
In my case I'm using mise tasks for that, because I don't buy into make anymore; I stay away from make unless I'm obliged to use it. I don't use makefiles anymore if I can avoid it; I use mise for version management and for tasks. So first I'm using my mise tasks to preview the go fix diff and see what differences we catch here. The very first one: we are sorting with the sort package, and the modernization is to use slices.Sort. The second one in the diff: we are using Split where the sequence variant, SplitSeq, with for-range is the modern form. And for range itself: since a few versions back you can range over integers, so you don't need a C-style for loop for these simple things. The last thing in the diff is similar, just in another place in the code. So I'm okay with that. I don't want to just preview now; I want to actually run go fix. Okay, it changed my file system, and that's okay, I wanted that. If we want to check again, we can take another look at the project and see the changes. But go fix is not a hit-and-miss type of thing; it's very reliable. It's not stochastic like LLMs; it's a deterministic tool. Just use it and everything will be okay. My one recommendation is to diff first, so you know what you are doing. So this is the part where we apply the fixes for the syntax updates in Go 1.26. And since we are on 1.26, we probably want to know how fast this program in particular runs. Remember, benchmarks are a very sketchy thing to do: noise on the machine, the number of iterations, there are several things that make your benchmark results vary. So I will run the benchmarks here, but benchmarks take a little time, so I'll start them, continue, and after that we can take a look at the results. So let's start the benchmark, and let me explain what it tests.
Our benchmark creates a test server to receive all these requests, because we need to emulate our remote targets. We create a new checker, and the number of checks in this test is 500, all against the same backend URL, with no logger, because the logger here would just be noise. >> [snorts] >> We run it once first, just as a warm-up, because I don't trust benchmarks that run a single time or with the machine totally cold. We also deliberately force the GC to be under much more pressure: we save the history of checks, just to create an array, because the Go compiler is too smart for tests and could otherwise optimize everything away. So we keep this slice alive: don't kill this slice, please keep it in memory, let's create pressure. The rest is basically sorting the results and doing the math for calculating the p95 and p99. So now you know more or less what this benchmark does. We have it running with the default GC in 1.26, Green Tea, then with the experiment flag set to use the old one, and we compare with benchstat to get statistical relevance. It takes some time; it has started running, and I'll come back to it later. Um, okay. If we take this application as, say, our home server application that goes and checks whether my torrent box or something like that is online, we probably don't have Kubernetes, Grafana, OTel, and things like that, so the logs should be written in one place, and we're not shipping those logs anywhere; for our home server that's fine. In this case, it's just a text handler that goes to stdout. You can grep it, you can do stuff with it, but that's not very usual when you have a simple program or just a single-instance machine.
So when I saw this, I had another idea, good or not, to create something that mimics the flight recorder from the Go runtime. A flight recorder is in the category of profilers that, instead of accumulating all the profiling data, puts the profiles in a ring: when the ring is full, it rotates, and only the latest samples stay there. But I don't want to lose my ability to send logs to stdout; I want to do both. So, to avoid doing the same thing twice (write this in text, write this in JSON), my motivation is this kind of pet-project library that mimics the flight recorder, but this time it's not recording profiler data; it's recording logs. The idea is to import the log flight recorder; the name is very suggestive, you can find it easily. There's a problem here, because this project doesn't know this library yet, but with the IDE it's very easy to solve: the context action is already contextualized, it's as simple as pressing enter, and the way of fixing the dependencies is running go mod tidy in the background. It's running, it has already sorted the import, and you can see it in go.mod; I don't need to introduce it by hand. And now you can use it. But before using it, we need to use the multi-handler to have more than one. Okay, what is it expecting? Several handlers, and we already have one, right? This here is a new text handler, so let's extract it and call it textHandler. If you don't change anything else in this program and use the multi-handler here, passing it our handler, nothing in the behavior should change and everything continues to work. You can see it's working before we progress. So let me continue instead of dwelling on this problem. Okay, this is the library, and it has basically two options, starting with how much to retain in this buffer.
In our case it will be very easy to observe, so maybe five, and there are many, many options, but let's keep things simple. Oh, give me names, please. Okay, AI. Thank you, AI. So, this is a log recorder. We have the text handler and now the log recorder handler. We have announcements; where is my benchmark? I don't know, I can look into it later. So we have two ways to observe logs: just reading stdout, or looking at the recorder. But the recorder isn't being used anywhere yet, so I can create that. First, though, let's see what we have now, so we can run the application. What's happening? It's not okay; let me try again. What? I don't know what's happening, but I can look at it in a couple of moments. Let me continue showing the features. I'm very used to using live templates, so when I need a new handler, I probably won't write it by hand. In this case, I want logs. Click, debug; I think logs is a better place. And I also want to see it as JSON, but I'm not a guy who likes to copy and paste this much, so I also have a live template for that. And now you don't need to do anything by hand. In this case, I want to access this recorder to show me the latest entries; our number here is five, the latest five. So we have the log recorder, and you can write to a writer, and a ResponseWriter is definitely a writer. Okay, I don't want to know the number of bytes, and I'm always treating errors and handling them; in this case, I'm only writing on the header. That is a problem, because if I can't write to the logger, I have a problem with the log; I don't want to create that circular dependency. Okay, what do we have here? We have a way to see the last five log records, independent of whether the application is receiving one million or one request per second. So let me try to fix this little bug of the application not running. Let me check: we are on 1.26, that's okay. Let's try running main once again. Let me check the run configuration to make sure nothing is wrong here. I don't think so.
No, there's something wrong. Okay, I can fix that a little bit later. But the point is that it's very easy to create the handler func using live templates, the content type as well, and now you have the log recorder; next we would use the HTTP client and show the logs. I think we are in 2026. Yeah. And I want to focus all my development effort on things that need deeper thinking, not on routine, boring tasks, things that I can delegate. So, for the debug metrics, as we saw in the previous section, it's okay to use them in an application like a home server, but in an enterprise setting you're probably collecting them via Prometheus or an OpenTelemetry collector or something like that. For me, I don't want to write things like that myself; I will delegate it to Junie, for sure, because it's 2026 and AI is everywhere. Let's use AI where it helps and doesn't get in my way. So how can I achieve that? I think it's simple: we go to Junie and ask it to implement exactly this endpoint, "implement the endpoint", with its name as the input, and the gods of AI will bless me with the right answer. There's basically no planning, because this feature is very, very small; there's not much to do here. And the most important part for Junie to get it right is the quality of the comments and the TODO. It tries a lot of stuff, like editing Go files; it will probably try to test by itself, which I think is good, but for some people that gets in the way. So if you don't want the LLM to be that autonomous, ask in the prompt: implement, but do not run tests, do not assert things. I prefer it to be as autonomous as it can be, because this is a very, very easy task. So let's see the diff. It imports metrics, and in place of my TODO, "when Go 1.26 is released, implement this endpoint" (okay, it's released), it handles the metric names, everything's okay, and I want to show text/plain. Perfect, it's what I expected. So, that's good enough for me.
But this is more of the Go that we already know. What about 1.26? Because this is the main point here: can AI keep up with 1.26? Can it write code that works well and is modern enough, in terms of syntax, for 1.26? Let's try another example. I've already done one of my TODOs; what about the other TODOs in the project? Let's see. We have two more: one is to implement sensible defaults for the HTTP client config, and the other one is about error status. Okay, I'll go with that. The question is whether Junie can write code that is syntactically up to date with 1.26. Let's try. The method is called NewConfig. Let's access Junie again, new task: implement NewConfig from the file config.go. Here's the challenge, a real challenge: will it write like a dinosaur from 2018, or like Go 1.26? Oh, it's already using the new form of new; that's good. Yes. >> Uh, quick heads up, Alex. We're kind of hitting the end of the live stream, so you have three minutes for this. Does this work for you? >> Yeah. Yeah. Yeah. That's right. >> Sweet. >> Cool. So, it's not using the old patterns; it's already using the new pattern. We recognize how good it is at handling errors, because where we previously had manual unwrapping, now we have errors.As. But this time, let's check if Claude Code can do that. This is the name of the function: implement error status. What is it trying to do? Plain error checks... that's not good. Claude, we can do better. So, Claude is not living the dream, but we can do better, because we have a plugin. In the JetBrains GitHub, there is a plugin for Claude Code called go modern guidelines, and the main point is that when you use it, Claude Code gets the same capabilities, in terms of Go modernization, as Junie. So you can use your preferred tool and also use Junie's intelligence. For that we need to access the plugins. I have it installed, so I'm only enabling it.
But if you want to know how to do it, the installation steps are very easy. Now we can try again, but we need to restart first. Cool. Okay, Claude, it's your time to shine. Use modern Go and implement the errorStatus function. I like that. I like that much more than before. Instead of just stamping out one more errors.As, now it knows how to do it in a way that doesn't get in my way in code reviews; it's just applying it. That's good. Looks good to me. It's just refining some stuff, probably the imports. Okay, Claude, you can go. Bye-bye. So that's it: with the go-modern-guidelines plugin from JetBrains, you can also produce the same quality of Go code that keeps up with Go 1.26, using your preferred tool. Uh, I think that's it. We have time, so if you have any questions about the code, just hit me. >> Sweet. Thank you so much. >> Um, I will just be a mean host and we'll skip questions. But if you have any questions, reach out to Alex on LinkedIn, Twitter, Bluesky, something like that. I think I actually have y'all's socials on my slide, which will magically appear now. Easy, magician. Um, while we wrap things up, one thing I would be curious about: if anyone out there already has Go 1.26 running in production, put it in the chat. That way everyone can pay you a little bit of respect for being very brave. Um, our marketing is always super interested in analyzing all those things, so if you have any feedback for us on how we can do these things better in the future, scan those QR codes. All feedback is good feedback. So just put in your comments and we'll make sure to do this again in one way or another. Yes, there are all the socials. So if you have any questions about Alex's talk, reach out to him. I just put you on the spot now. Sorry for that. [laughter] But yes, I think that's it. Don't forget about the GoLand discount code. It was in the chat a couple of times. So really appreciate you all joining us today. This was a lot of fun. I had a good time.
I hope you had a good time too, Anton. Thank you so much again for joining us. And I was about to forget the usual YouTube thing: do the like and subscribe thing, you know. Thank you. Appreciate you. See you next time. Bye. Bye. Bye. Thank you.
Video description
► Want to try GoLand? Download now: https://www.jetbrains.com/go/
🎁 Get 6 months of free access - use code: GoReleaseParty
📚 Go modern guidelines: https://github.com/JetBrains/go-modern-guidelines

In this recording of our livestream, you'll see Go experts break down what's new in Go 1.26, why these changes matter, and their impact on real-world Go development. Anton Zhiyanov, Go educator and creator of https://www.antonz.org, takes you through an insightful deep dive into the key updates, showcasing live coding and practical examples. Alex Rios demonstrates how GoLand supports Go 1.26 from day one, helping you smoothly adopt the latest features and improvements. Enjoy technical insights, practical demonstrations, and engaging discussions.

#golang #golanguage

⭐️ Our resources ⭐️
GoLand: https://www.jetbrains.com/go
Product news and tutorials: https://blog.jetbrains.com/go/
GoLand on X: https://x.com/GoLandIDE/