The Linux Experiment · 13.4K views · 1.6K likes
Analysis Summary
Ask yourself: “If I turn the sound off, does this argument still hold up?”
Moral framing
Presenting a complex issue with genuine tradeoffs as a simple choice between right and wrong. Once something is framed as a moral issue, compromise feels like complicity and disagreement feels immoral rather than reasonable.
Haidt's Moral Foundations Theory; Lakoff's framing research (2004)
Worth Noting
Positive elements
- This video provides a concise summary of the 'Unified Attestation' API, which is a significant development for mobile privacy and de-Googled hardware.
Be Aware
Cautionary elements
- The use of moral framing to categorize software development tools as 'ethical' or 'unethical' can lead to community gatekeeping based on personal preference rather than technical merit.
Influence Dimensions
Knowing about these techniques makes them visible, not powerless. The ones that work best on you are the ones that match beliefs you already hold.
This analysis is a tool for your own thinking — what you do with it is up to you.
Transcript
Hey everyone, and welcome to this week's edition of the Linux and open-source news show. This week we've got more open-source projects including AI-generated code, namely KWin, Lutris, and also the AMD drivers. So we'll talk about the implications of that, especially if, like me, you're not a big fan of AI in general. We also have SUSE potentially being sold yet again, which could also have some implications. We have a new effort to bring the Google Play Integrity API into a new open-source form that apps could use, which would unlock a lot of stuff for de-Googled ROMs. And of course, we also have a ton of other Linux and open-source related news, plus this message from our sponsor. This video is sponsored by Squarespace. They are your all-in-one platform to create, publish, and manage your own website. They have really easy tools to create your online presence, whether you know how to code or not. They have what they call their Blueprint system, which lets you pick from a variety of pre-built templates that will suit any type of website, whether you're looking for a simple blog, an online store, a video platform, whatever. All these templates are also optimized to give you solid SEO, and they even have a bunch of SEO tools to make sure that your website doesn't end up on the last page of Google search results. On top of that, to go further, Squarespace has their own design engine to create your own pages. You can just drag and drop elements where you want them, and you can change the colors, the fonts, and tweak the template however you want, or you can automate all of this. And once you have something you like, you can add extra features like creating your own online shop with a payment system that handles credit cards, PayPal, Apple Pay, and more. You can even design your own logo with Squarespace and book your own domain name.
So check out squarespace.com/thelinuxexperiment or just click the link in the description, and you'll get 10% off your first website or domain purchase. So, AI-generated code is making its way into more and more open-source projects. As of this week, some new patches, for example for the AMD drivers, proudly display a "code developed by Claude Sonnet 4.5" mention. These patches address HDR and color management improvements for AMD GPUs, but there are also patches for KWin, which are also co-authored, as they say, by Claude. Apparently, the AI tool even developed a plug-in that is mostly used for debugging, to show the surfaces being rendered and their GPU offload, which will probably help with making more improvements in the future. The developer says that Claude wrote all the code, but he says that humans should always review what they get from a large language model, that they should steer that code to a place where it's actually workable and integrates well with the codebase, and that they shouldn't just throw AI slop code at maintainers, because of course your name and reputation as a developer are on the line when you do so. He also says that calling these tools AI is not really accurate: those are LLMs, which don't match the generally accepted definition of AI. He also says that those tools are interesting for understanding large codebases and are really good at making code that fits into those codebases. Another project that is now using AI is Lutris. People started noticing a lot of commits to the project that were LLM-generated. They asked if Lutris was turning into slop, and the creator replied saying that it is only slop if you don't know what you're doing or if you're using bad LLMs. He says that if you have 30 years of experience and you use the right tools, then those things are a real help and basically let you do stuff that you wouldn't have been able to do otherwise.
He also added that the issues with these tools are caused by our current capitalistic culture, saying that it's not AI as a whole that laid people off or that stole content off of the internet; it was the companies building the AI tools. And I don't want to blame anyone for using AI tools to generate code. I have good friends that are really good developers and that are using AI for that. A lot of open-source projects are being written by volunteers, and AI is sometimes the only way they can manage to write enough code or enough features to have their projects move along at a faster pace. So there's that advantage. But on the other hand, saying, "Hey, it's the fault of companies, but I'm still going to give money to these companies to use their AI tools" seems hypocritical to me. You can't say these companies are the bad guys but keep giving them money because it makes your life easier, because you're still voting with your wallet. You're basically telling these companies that what they're doing is okay, which I don't think it is. AI is some of the most unethically built software there is, trained on content that is publicly available but not freely available. There are licenses, there's copyright around that, and whether we like it or not, this needs to be respected. And when you give money to a company like Anthropic, like Google, like OpenAI, you're basically telling them: what you're doing is fine, keep doing it. And I don't think that sends the right signal. On the other hand, all of those AI tools are helping Linux and open source grow much faster, because in the very limited time people have, they can write a lot more stuff. And if they have the experience to make sure that what the AI wrote is good enough, if they can correct it, then the issue is somewhat alleviated.
There are still the ethical problems, but the code quality problem is not a real one if the person actually reads what the LLM spits out, fixes it, corrects it, and steers it in the right direction. So, I really don't like AI. I really don't want to use projects that use AI, but at some point, I don't think that's going to be feasible. Like, if I have to stop using an AMD GPU just because there's AI code in the AMD drivers, that's going to severely limit my ability to actually do stuff. Now, there's a new EU-based consortium, led by Volla, makers of the Volla Phone, that aims to provide an alternative to the Google Play Integrity API. This would basically let de-Googled ROMs and third-party Android ROMs that don't use the Google Play services still have access to stuff like secure payments, banking apps, digital wallets, governmental or public applications: basically everything that is currently handled through the Google Play Integrity API, without having to be locked in to a Google service. This thing is called Unified Attestation, and it would provide the same safety and security checks for de-Googled ROMs, but Google-free and made in the EU instead of the US. Other partners in the group include iodé and Murena, all makers of de-Googled ROMs. Notably absent is GrapheneOS. We'll see if they're invited, and if they want to join or not. Of course, this new API will be open source. They're looking at the Apache 2.0 license, to match the one that the Android Open Source Project uses. Design-wise, there would just be an operating system service that applications can call to see if the OS meets the security requirements for the application's operations, like, for example, payments for a banking app. This service would check against a certificate, and you would also have a test suite that helps certify devices, to make sure that they are indeed secure.
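To make the described flow concrete — an app asking an OS-level service whether the device meets its security bar — here is a minimal sketch. Every name in it (AttestationService, check_device, the certificate fields, the device model) is invented for illustration; the consortium has not published an actual API, so this only mirrors the design as described: a local service, a certificate check, no actual age of the device data exposed to the app.

```python
# Hypothetical sketch of the Unified Attestation flow described above.
# All names here are invented; the real API has not been published yet.

from dataclasses import dataclass

@dataclass
class AttestationResult:
    meets_requirements: bool  # does the OS pass the security checks?
    certificate_id: str       # which consortium certificate vouched for it

class AttestationService:
    """Stand-in for the OS service an application would call."""

    def __init__(self, certified_devices: dict):
        # In the described design, certification would come from
        # consortium members mutually auditing each other's devices.
        self._certified = certified_devices

    def check_device(self, device_model: str, requirement: str) -> AttestationResult:
        cert = self._certified.get(device_model)
        if cert is None:
            # Device was never certified: the app gets a plain "no".
            return AttestationResult(False, "")
        return AttestationResult(requirement in cert["capabilities"], cert["id"])

# A banking app asking whether the device is safe for payments:
service = AttestationService({
    "volla-phone-x": {"id": "eu-cert-001", "capabilities": {"payments", "wallet"}},
})
result = service.check_device("volla-phone-x", "payments")
```

The key property from the article survives even in this toy version: the app only learns a pass/fail verdict plus which certificate backed it, not anything else about the system.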
Apparently, consortium members would mutually check and certify each other's devices, so everything is fair and transparent. They say they do not want a centralized authority that decides who is trusted and who is not. They just want to make sure everything is fully transparent. Now, this sounds really good to me. Applications will of course need to implement support for it, which is going to take a while, unless the initiative is really pushed heavily by various EU governmental bodies saying, hey, this is what you need to use in the EU to have that security layer. There's also the fact that they are going to certify each other's devices: how does it work if you just want to slap a de-Googled ROM on hardware you already own? Are those device models going to be checked as well, or will only the devices that those actors sell be certified? We're going to see how it goes, but I think this is a very interesting initiative, and it could open the doors to a more usable de-Googled ROM ecosystem, which I think is really good. Now, apparently SUSE could be for sale yet again. SUSE was bought by Novell a long while ago, in the early 2000s. It was actually still at Novell when I started using Linux, and that came with some controversies, if I remember correctly. Then it was acquired by a company that merged with another group, and then it went private through an equity firm that bought them, which is where we are now. And it seems that equity firm is looking at selling SUSE for around 6 billion US dollars. They've actually hired an investment bank to see how that could happen. No one involved agreed to answer requests for comment from Reuters, at least at the time I'm recording this. But if they sell SUSE for 6 billion, that would be about double what they paid for it in 2023. So, not bad for three years of just owning a company. Now, of course, it seems that everyone is selling their stocks or their properties around software because of AI.
Everyone seems to be assuming that AI will just destroy the value of these companies, so they're trying to sell them off right now. I'm not sure that's actually an intelligent way of doing things, but that's the usual speculative self-fulfilling prophecy: if everyone sells their software properties right now, everyone thinks they're worth less, and so their valuation just crumbles. Hopefully, if this sale happens, it doesn't result in job losses, in job reallocations, in people not being able to contribute as much as they were, or in a reduction of the projects SUSE is handling, because they have a big, big amount of stuff that they're working on. So hopefully this doesn't affect anyone working there, but with every company sale, you never know what can happen. Now, Linux From Scratch published their new version, which is the first one that is systemd-only. I talked about that a few weeks ago: Linux From Scratch could no longer invest the time to maintain multiple init systems and to prepare their guides for everything, especially as the major desktops that most people would end up using, like GNOME or KDE, start depending more and more on systemd. GNOME is already pretty entangled with it, and KDE has linked their own login manager to systemd, even though the rest of the desktop is still fully agnostic. So Linux From Scratch is now systemd-only, and Beyond Linux From Scratch, the additional book that lets you install, or compile, way more stuff, is also systemd-only. Older versions are still available, but of course the instructions in these older books might no longer work after a few weeks, months, or years. But there is a ray of hope: someone who was the senior editor for Linux From Scratch and Beyond Linux From Scratch in the early 2000s has volunteered to take over the maintenance of the SysV init versions of the books. So they might reappear in the future, although there is no time frame yet.
And if there is one distro that actually has something to gain from maintaining access to multiple init systems, it is Linux From Scratch. This is really more a book to explain to you how you can compile your own system. You might not really want to use that on a day-to-day basis, but at the very least, it's going to teach you a lot of stuff about Linux. And so having access to all the options you could use to build your own Linux system is, I think, a really good thing. So hopefully they can manage to build and maintain the books with support for SysV init and not just systemd. Now, who could have guessed? AI translations on Wikipedia aren't just doing translations. They're also introducing errors and hallucinations in the process. Wikipedia editors decided to stop certain contributors and restrict their ability to edit pages, because it became clear that they used AI to translate existing articles from English into new languages. And this did not go well: the translated articles added very clear errors. This was all handled through a nonprofit called the Open Knowledge Association, and they apparently paid contributors to use AI and automate translation work from English to other languages, which is stupid, because AI is just not doing a good job at this, and it can't even translate without imagining things. So, errors include citing a book name and page number that didn't discuss the page's topic in the first place. Some sources were swapped around. Sentences were added that did not exist in the original article, without any sources. Paragraphs were added that had nothing to do with the initial topic. And apparently a large proportion of these paid editors don't really speak English all that well. They didn't really proofread what the AI generated. They didn't add links. In short, it was sloppy work, sometimes even breaking formatting through bad copy-paste.
Even worse, the Open Knowledge Association apparently instructed people to just use the output of the AI without changing anything, because they probably knew most people they paid didn't speak English well enough to make sure that what the AI spat out was good enough. Now, the Open Knowledge Association said that they emphasize quality over speed, which does not seem to be true at all. They also added an additional verification step, which is basically feeding what the AI generated and the original page into another LLM, to ask it if there were any significant differences. That might help a little bit, but you're still feeding some AI output into another AI, so chances are it's still going to be pretty bad. And only after that will they have human verification, which obviously should have been the very first step, because you just don't publish something AI-generated as is; it's just never good enough. But to contradict all that I just said about AI, we have Anthropic, who apparently detected two dozen vulnerabilities in the latest version of Firefox using their Claude tool, and this includes, apparently, a very severe vulnerability. They did this in partnership with Mozilla, and they scanned 6,000 files using Claude, their coding assistant. This thing ran for two weeks, and they found 22 vulnerabilities; 14 of them were labeled as high severity by Mozilla. They will be fixed in upcoming releases. Anthropic is thus tooting its own horn, saying that this tool managed to detect all of this without any specific configuration for Firefox or Mozilla code: it is the out-of-the-box version of their model that did so. They said that Claude reasoned like a human being: it could look at past fixes to find similar problems that weren't fixed. It looked at detecting patterns, and applied more logic to assume that the things that broke in the past might also break in another part of the codebase at some point in the future.
And you all know I kind of despise AI for its many ethical failings, but this seems to be the right use case: you use the tool to detect stuff, and then you have a real human looking at what has been done, checking whether it still works, whether it's right, whether the reports are accurate or not. It's not the usual barrage of slop. It's something that is tailored, mastered, and supervised. And I think that's how AI should be used. It doesn't solve all the ethical issues that I have with it, but at the very least, it seems like the one use case where AI actually works: code, with a human supervising, fixing, and managing it. This feels like the right use case. Still unethical, though. Now, we have some work happening to try and improve per-game performance in the Mesa drivers, at least for AMD GPUs. Valve developers and developers working on RADV, the Vulkan driver for AMD GPUs on Linux, are looking at revamping drirc, the driconf system, which is mostly used right now to sidestep OpenGL or Vulkan bugs in different games. Basically, it gives you a suite of options to say, "Hey, this game implements this weirdly, so you can just ignore this property or this extension when you're running it." This work would let driconf manage performance optimizations, with per-game profiles and driver configurations as well as shader replacements. This is generally what Windows drivers are capable of doing on Windows, and Linux apparently lacks a few tools in that regard. It is just an idea for now, a proposal for Mesa, but if they manage to implement it, this could potentially open the door to the usual GPU config software that we see on Windows. I think the AMD one is called Adrenalin. This would help with overclocking, optimizing, and fine-tuning the performance for each game and program, and people have been asking for that type of program on Linux for a long, long while. It would be really good to get it on Linux.
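For context on what that options system looks like today: Mesa's per-game overrides live in drirc XML files, matched per driver and per application executable. Below is an illustrative fragment in the existing format, using one real RADV workaround option; the game name is made up, and the performance-profile and shader-replacement capabilities discussed above are still only a proposal, not something this format supports today.

```xml
<!-- Sketch of Mesa's existing drirc format (files typically live under
     /usr/share/drirc.d/ or ~/.drirc). "radv_zero_vram" is a real RADV
     workaround option; the application entry here is invented. -->
<driconf>
  <device driver="radv">
    <application name="Some Game" executable="somegame.exe">
      <option name="radv_zero_vram" value="true"/>
    </application>
  </device>
</driconf>
```

The proposal would essentially extend this kind of per-application matching from bug workarounds to performance tuning.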
This work would not really bring that software to Linux, but it could open the doors to someone actually writing a good GUI around all the options you could have, automatically detecting the right configurations, and all of that stuff. So, if it leads to that, that's pretty cool. Now, the freedesktop.org community is not having any of that age verification law situation. It seems they've closed a merge request that was submitted to the XDG protocols. The proposal was the previously mentioned D-Bus interface that would let applications query the age bracket that the user said they belong to, with categories like under 13, 13 to 15, 16 to 17, and 18 and over. All of it would have been stored locally; the actual age would never be accessible to the application. So that wasn't the worst implementation imaginable. But the freedesktop community did not want it, because, first, it is caving in to that sort of law, and second, it introduces a precedent in the XDG specs of adopting location-specific or country-specific policies. It would also potentially associate the freedesktop.org community with this law in the minds of people, which is also not good. So the author just ended up closing their merge request and said that they will look at implementing this proposal somewhere else, maybe in the Flatpak portals, where I'm sure they will face the exact same opposition, because the same problems apply there too. Now, apparently the System76 CEO managed to reach out to Californian lawmakers and senators to try and add exceptions for open source to this law. So this could change, but that's just for California. There are plenty of other laws everywhere in the world that aim at doing the same thing, sometimes even worse than what California proposed. And it's sort of a crappy situation, because if you want to support people living where these laws exist, you do have to implement compliance with them, and some APIs and some features.
But also, if you do so, then you're just caving in to these absolutely horrendous laws that no one in their right mind would really want to have on their systems, because this just opens the door to future mass surveillance. We just don't want that. Now, Unity, makers of the game engine of the same name, announced at GDC 2026 that they were expanding their support for Linux. They said that they're bringing official Steam support into Unity, meaning developers get an already-made Steamworks integration, which should remove a bit of work from the developers. This also means they will provide build targets for the Steam Deck and the Steam Machine. They also apparently want to improve their Linux native runtime, so developers don't need to rely on Proton as much if they want a fully native Linux build. They did say they like Proton, but that they cannot control how it works, and so they want to offer other ways of making games that run on Linux as well. That's pretty good news for Linux. Unity made a bunch of big blunders in the past, but it still seems to be a very popular game engine for a lot of game developers, especially for indie titles. So, having it support our platform a lot more than it currently does, or at least better, with more options, is a good sign. And this is also due to the growing popularity of our platform in the gaming sphere, which is again very good. Still at GDC, Valve talked a little bit about the Steam Machine and Steam Frame verified programs that they will launch alongside those new devices, which is basically the same thing as the Steam Deck Verified status, but with fewer support tiers for these new devices. So for the Steam Machine, this is called Steam Verified. If a game is already Steam Deck Verified, then it will automatically be verified for the Steam Machine, which is normal, because the game just runs.
If the game is Steam Deck Playable, but the only problem is the sizing of elements on screen, then it will also be marked as verified for the Steam Machine. If it is only playable on the Deck due to other problems, like requiring keyboard input, then it will be marked as playable on the Steam Machine. If the game is unsupported on the Deck, then either it will trigger another test on the Steam Machine, in case the game didn't run on the Steam Deck for performance reasons, or, if the problem is linked to non-compatibility with SteamOS, like, for example, the game requiring some anti-cheat, then it will be marked, of course, as unsupported on the Steam Machine as well. So, this program will be filled immediately with ratings for most games that have already been tested on the Deck, which is nice. There's one caveat, though: for a game to be verified on the performance side on the Steam Machine, it just has to reach 30 fps at 1080p. So, that's going to let a lot of stuff pass, even though it runs pretty poorly. On the Steam Frame, it will not be as easy. The Steam Frame verified program will only vouch for games that play standalone on the thing. They need to be fully playable with the new VR controllers. They must run at 90 fps for VR titles, and at 30 fps at 720p for games that are played on a flat plane, so basically displayed on a screen inside your VR headset, but not really being VR games. Those are not huge requirements in terms of performance, and I would assume playing a game at 720p, 30 fps, with screens basically glued to your eyes, is not going to be a very pleasant experience. All games will still need to be retested for the Frame, unless they're already unsupported on the Steam Deck because of performance or SteamOS support, in which case they will be marked as unsupported on the Steam Frame as well, which seems to indicate that the Steam Frame will not have better performance than the Steam Deck in standalone mode.
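The Deck-to-Machine carry-over rules above boil down to a small lookup. This is just my paraphrase of the described rules in code form, not anything Valve ships; the rating strings and the "retest" outcome label are my own names for illustration.

```python
def steam_machine_rating(deck_rating: str, deck_issue: str = "") -> str:
    """Map a game's Steam Deck rating to its Steam Machine rating,
    following the carry-over rules described above (my paraphrase,
    not Valve's actual logic).

    deck_issue is the reason behind the Deck rating, e.g. "ui-scaling",
    "keyboard-input", "performance", or "steamos-incompatible".
    """
    if deck_rating == "verified":
        return "verified"          # carries over automatically
    if deck_rating == "playable":
        # UI sizing issues don't matter on a TV-sized screen, so those
        # games get promoted; other issues keep the game at "playable".
        return "verified" if deck_issue == "ui-scaling" else "playable"
    if deck_rating == "unsupported":
        # A performance failure on the Deck triggers a fresh test on the
        # Machine; a SteamOS incompatibility (e.g. anti-cheat) stays
        # unsupported.
        return "retest" if deck_issue == "performance" else "unsupported"
    raise ValueError(f"unknown Steam Deck rating: {deck_rating!r}")
```

For example, a Deck-playable game whose only issue was UI scaling comes out as "verified" on the Machine, while an anti-cheat-blocked title stays "unsupported".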
Finally, they talked about how to bring games to the Steam Frame. There's either the Android path, which will just run the Android version through leptton, which is Proton but for Android apps, based on Waydroid, or there's the emulation route, which will translate the x86 instructions into ARM instructions using Proton and FEX. So, what we know is that the Steam Machine will have a very large catalog of playable titles at launch: around 25,000 to 26,000 games, if we include all the verified and most of the playable games, plus some of the games unsupported on the Deck that will become playable thanks to better performance. So, that's pretty neat. We also know that the Steam Frame will probably not really perform better than a Steam Deck by default, whether that's because the hardware is actually in the same range of capabilities, or because it is a bit more powerful but the translation knocks a little bit of performance out of it. We don't really know. But this seems like it's going to be a relatively basic standalone experience. Don't expect to play gigantic games in VR in standalone; you will have to stream the games from your PC to the Steam Frame for those to run well. And finally, we know that there's that very solid leptton layer, because if they use it as one of the two supported paths for playing Android titles or Android apps on the Steam Frame, it means they're pretty confident that this thing works. And that's good, because if they make it available outside of the Steam Frame, this opens the door to a lot of Linux phones running a true Linux distro with leptton to run the Android apps you might want on top of it. That could be a game-changer for Linux-powered phones, and I will be looking forward to how you can implement and run that, because that looks really cool. In the meantime, if you want a solid Linux gaming computer, you have our sponsor, Tuxedo Computers.
So, Tuxedo are a German-based manufacturer of PCs that run Linux out of the box. They ship to most countries in the world, and they have a big range of computers, from the more affordable small-form-factor laptops all the way up to workstations, giant gaming PCs, and gaming laptops, with plenty of options in between for all price points and all performance points. You also have plenty of agency over what you put inside, in terms of the CPU, the GPU, the keyboard layout you want on your laptop, even the logo you want engraved on the lid, or no logo at all if you're not a fan of branding. I only use their computers these days. Everything that you see or hear from me is done on one of their laptops. All my gaming is done on one of their desktops. They've been supporting the channel for a long while now. They're really fantastic, and as usual, the link is down in the description. Anyway, this will conclude today's episode. You know how to support it: you have plenty of YouTube buttons and the comment section, and you know why that's important for the algorithm. You also have plenty of links down in the description, to the articles I used to make this show and to ways to support this show as well. You also have a daily version of this show on the Patreon and YouTube member pages, so do check that out if you want. And in the meantime, thank you all for watching, and I guess you'll see me in the next one. Bye.
Video description
Head to https://squarespace.com/thelinuxexperiment to save 10% off your first purchase of a website or domain using code thelinuxexperiment Grab a brand new laptop or desktop running Linux: https://www.tuxedocomputers.com/en# 👏 SUPPORT THE CHANNEL: Get access to: - a Daily Linux News show - a weekly patroncast for more thoughts - your name in the credits YouTube: https://www.youtube.com/@thelinuxexp/join Patreon: https://www.patreon.com/thelinuxexperiment Or, you can donate whatever you want: https://paypal.me/thelinuxexp Liberapay: https://liberapay.com/TheLinuxExperiment/ 👕 GET TLE MERCH Support the channel AND get cool new gear: https://the-linux-experiment.creator-spring.com/ Timestamps 00:00 Intro 00:41 Sponsor: SquareSpace 01:57 AMD drivers, Kwin and Lutris now have AI-generated code 05:55 New EU led Android Integrity API 08:16 SUSE could be sold, yet again 09:53 LFS is now systemD only 11:33 AI translations add hallucinations to Wikipedia articles 13:45 Anthropic detects vulnerabilities in Firefox 15:52 Valve & RADV devs work on per-game driver optimization 16:57 Proposal for age verification closed 18:51 Unity 3D will better support Linux 19:58 Valve details their new Steam Verified program 23:57 Sponsor: Tuxedo Computers Links: AMD drivers, Kwin and Lutris now have AI-generated code https://www.gamingonlinux.com/2026/03/lutris-now-being-built-with-claude-ai-developer-decides-to-hide-it-after-backlash https://www.phoronix.com/news/AMD-More-HDR-KWin-Claude-Code New EU led Android Integrity API https://www.heise.de/en/news/Paying-without-Google-New-consortium-wants-to-remove-custom-ROM-hurdles-11204037.html SUSE could be sold, yet again https://www.reuters.com/business/eqt-eyes-potential-6-billion-sale-linux-pioneer-suse-sources-say-2026-03-09/ LFS is now systemD only https://lists.linuxfromscratch.org/sympa/arc/lfs-announce/2026-03/msg00000.html AI translations add hallucinations to Wikipedia articles 
https://www.404media.co/ai-translations-are-adding-hallucinations-to-wikipedia-articles/ Anthropic detects tons of vulnerabilities in Firefox https://www.techradar.com/pro/security/anthropic-says-it-found-a-heap-of-firefox-security-flaws-using-new-claude-tools-says-ai-is-making-it-possible-to-detect-severe-security-vulnerabilities-at-highly-accelerated-speeds Valve & RADV devs work on per-game driver optimization https://www.phoronix.com/news/Mesa-RADV-More-Per-Game-Tuning Proposal for age verification closed https://linuxiac.com/xdg-age-verification-interface-proposal-closed/ https://fosstodon.org/@carlrichell/116201429639953387 Unity 3D will better support Linux https://www.gamingonlinux.com/2026/03/unity-announce-expanded-supported-for-steam-linux-steam-deck-and-steam-machine/ Valve details their new Steam Verified program https://www.gamingonlinux.com/2026/03/valve-detail-steam-frame-and-steam-machine-verified-requirements-at-gdc-2026/ #linuxdesktop #linux #technews