bouncer

DJ Ware · 8.4K views · 590 likes

Analysis Summary

30% Low Influence

“Be aware that the 'pure' translation of C to Rust used in these benchmarks may intentionally trigger Rust's safety overhead to make C appear faster in specific contexts, framing the choice as a direct trade-off between speed and safety.”

Transparency Mostly Transparent
Primary technique

Performed authenticity

The deliberate construction of "realness" — confessional tone, casual filming, strategic vulnerability — designed to lower your guard. When someone appears unpolished and honest, you evaluate their claims less critically. The spontaneity is rehearsed.

Goffman's dramaturgy (1959); Audrezet et al. (2020) on performed authenticity

Human Detected
98%

Signals

The transcript exhibits clear markers of human narration, including natural verbal fillers, spontaneous corrections, and a conversational flow that lacks the rhythmic perfection of synthetic voices. The content demonstrates deep technical nuance and personal perspective consistent with a human expert sharing knowledge.

Natural Speech Disfluencies Transcript contains natural stutters, filler words ('uh', 'yeah'), and self-corrections ('there's there's kind of an old rule', 'I've said this before, and my opinion doesn't matter, that's for sure').
Personal Anecdotes and Context The speaker references specific historical hardware (PDP11, Cray) and personal hardware (AMD Ryzen) with a conversational tone that reflects deep domain expertise.
Syntactic Irregularity Sentences vary significantly in length and structure, including run-on thoughts and informal phrasing ('Can't forget Algo') typical of unscripted or semi-scripted human speech.

Worth Noting

Positive elements

  • This video provides a rare look at how classic 1970s/80s benchmarks like Dhrystone and Whetstone behave when ported to modern memory-safe languages.

Be Aware

Cautionary elements

  • The use of the Tanenbaum-Torvalds debate serves as a 'consensus manufacturing' tool to imply that Redox OS is the next inevitable evolution of computing.

About this analysis

Knowing about these techniques makes them visible, not powerless. The ones that work best on you are the ones that match beliefs you already hold.

This analysis is a tool for your own thinking — what you do with it is up to you.

Analyzed: March 23, 2026 at 20:38 UTC · Model: google/gemini-3-flash-preview-20251217 · Prompt Pack: bouncer_influence_analyzer 2026-03-08a · App Version: 0.1.0
Transcript

You know, in engineering there's kind of an old rule that says your opinion doesn't matter. I've said this before, and my opinion doesn't matter, that's for sure. What matters is what you can measure. Lately there's been a lot of talk: Rust is better than C, or C is better than Rust. Rust is safer, C is dangerous. You've heard all this before. None of it matters unless you can actually generate the machine code from both languages and compare not only how fast it is but how accurate it is. If the answer is wrong, it doesn't matter how fast it is. This isn't a new problem. Back in the 1960s, Fortran compilers were evolving through WATFOR, WATFIV, FORTRAN 66 and 77, and then came the new languages: PL/I and COBOL, then BASIC and Pascal, on and on the languages went. ALGOL, can't forget ALGOL. Does the code run fast, and does it produce the right answer? That's kind of the whole point, isn't it? There was a British engineer, Roy Longbottom, who solved the problem for everyone. Yeah, he did. He doesn't get much credit, though. He didn't just write synthetic benchmarks; he wrote benchmarks that pushed real hardware in real ways, and he found all kinds of valuable results from them. One of the questions he was asked was: how do I compare the results from two different compilers for totally different languages? People were saying, I want to move from this language to that one; how do I know it's worth my time and effort? So that was one of the things he started working on. He looked at the whole question of accuracy and put it in a form where you could extract it later: checksumming not just the answer, but also the algorithm that created it.
In other words, did somebody go in and mess with the algorithm, or is it the same algorithm the benchmark was published with? The beauty of Roy's work is that it's timeless. The algorithms don't care: you can run them on a PDP-11, a Cray, a Raspberry Pi, or a modern AMD Ryzen like I'm doing today. And that's exactly what we're going to do today. We're going to compare a pure C program, with nothing embedded in it coming from Rust, against a pure Rust version of that same code. We're going to translate the C over to Rust as faithfully as we can: the same algorithms, the same data types where possible, the same order of operations and precedence, because making assumptions about precedence is how you get inaccuracies. There are no foreign functions in this. We're not calling C from Rust, so it's not FFI, it's not embedded. Basically, that would be cheating, and no, we're not going to cheat. So we're going to take four of Roy Longbottom's classic benchmarks: Dhrystone and Whetstone among them. Mandelbrot isn't one of his, but it's a modern one. The other one I'm working on is Livermore Loops. So what does Dhrystone really tell us? Well, it deals with pointers, but more importantly it deals with strings. The year it was written, strings weren't as important; numbers were. Today it's just the opposite: the string is more important. So it actually has more meaning today than it did back when it was written. Whetstone is floating point; it's math, and the ability to execute all kinds of different math.
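The "line for line" translation constraint described here can be sketched in Rust. This is a minimal illustration only, not the actual benchmark source; `checked_sum` is a made-up example function. It keeps the C program's integer width, operation order, and index-based access, which is what keeps Rust's bounds checks in play:

```rust
// Hypothetical sketch: a C-style accumulation loop carried over to Rust
// "line for line": same width as the C `unsigned int`, same operation
// order, and indexed access rather than idiomatic iterators.
fn checked_sum(data: &[u32]) -> u32 {
    let mut total: u32 = 0;
    let mut i: usize = 0;
    while i < data.len() {
        total = total.wrapping_add(data[i]); // indexed access is bounds-checked
        i += 1;
    }
    total
}

fn main() {
    let data: Vec<u32> = (1..=10).collect();
    println!("{}", checked_sum(&data)); // prints 55
}
```

Idiomatic Rust would use an iterator (which lets the compiler elide the bounds checks), but a faithful translation deliberately mirrors the C structure; that structural fidelity is exactly the trade-off the video's Dhrystone numbers are probing.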
So, to do it accurately and with what we call compiler stability. Floating point is not precise: there's an estimate that happens when translating between binary and decimal representations, so you get rounding errors, and that creates inaccuracies. Also, Mandelbrot: that one has a checksum and a speed. You give it a size, it generates the set, and it tells you how long it took to draw it, but it also tells you how accurate the drawing was. So first let's take a look at Roy's tests and find out what they revealed. We ran Rust and C side by side. In this benchmark section we'll start with Dhrystone, because it exposes pointer behavior, and I think that's going to be critical for Rust, since Rust is trying to manage the safety of those pointers: there's extra code that runs before a pointer is allowed to be accessed on this hardware. The pure C version we based all this on completed its run in about 12 nanoseconds per operation, delivering roughly 82 million Dhrystones per second. Right, that's just a number by itself. But how did Rust do? The Rust version, and I'm just going to emphasize this, we tried to do a line-for-line match with the C, using the same structure, came in at about 94 nanoseconds per iteration, or about 10 million Dhrystones per second. So you can see these are wildly different: C performed at about eight times the speed of Rust. I think the only reason for that would be the safety checks on the pointers, to make sure we weren't jumping outside the program and all that. The winner of that one goes to C. Next we have Whetstone, the mathematical foundations of computing.
These are floating-point operations: polynomials, approximations, trigonometric functions, exponentials, the algebraic stability of the compiler, all of that gets tested. Again, the pure C version was the standard, and we tried to mimic it in Rust. C delivered about 11,500 MWIPS (million Whetstone instructions per second) versus about 13,000 for Rust. So Rust performed better in this case, because it was dealing with actual values and addresses instead of checked references; the safety checks didn't have to come into play. I think this also suggests that the LLVM backend Rust sits on is actually better at floating-point code generation than GNU C is. Next we move on to MemSpeed and find out what that's all about. It measures how fast each language can move data: reading, writing, copying, scaling doubles, all in a tight loop. So this isn't about language, this is about memory. It forces the arrays to get bigger and bigger until they pop out of L1 cache, then bigger still until they pop out of L2 and start hitting main memory. That's where we want it, and that's where we see the biggest drop in performance, as we would expect. C delivered high throughput on the small arrays, well over 200 gigabytes per second, whereas Rust managed about 36 to 60 gigabytes per second depending on which of the sub-tests it was; on some it did better, on some poorer. I think that again has to do with managing memory to make sure you're within bounds. Mandelbrot is kind of interesting because it tests everything at once: branching behavior, integer math, floating point, and most importantly it produces a checksum that verifies not only the answer but also the algorithm used to get to that answer. So if you go in and alter the code, you're going to alter the algorithm.
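A MemSpeed-style measurement, growing the working set until it falls out of each cache level, can be sketched like this. This is my own minimal illustration, not Roy Longbottom's code; the sizes and repetition counts are arbitrary demo values:

```rust
use std::time::Instant;

// Sketch of a bandwidth test: time a simple read/sum pass over arrays
// of growing size, so the working set spills out of L1, then L2, into
// main memory, and the measured GB/s drops at each step.
fn sum_pass(data: &[f64]) -> f64 {
    let mut s = 0.0;
    for &x in data {
        s += x;
    }
    s
}

fn main() {
    // Working sets of roughly 4 KiB, 256 KiB, and 8 MiB of f64s.
    for words in [512usize, 32_768, 1_048_576] {
        let data = vec![1.0f64; words];
        let reps = 50;
        let start = Instant::now();
        let mut sink = 0.0;
        for _ in 0..reps {
            sink += sum_pass(&data); // keep the result live so the loop isn't optimized away
        }
        let secs = start.elapsed().as_secs_f64();
        let gbs = (words * 8 * reps) as f64 / secs / 1e9;
        println!("{:>9} doubles: {:6.2} GB/s (sink {})", words, gbs, sink);
    }
}
```

Accumulating into `sink` and printing it is the usual trick to stop the compiler from deleting a loop whose result is never used, which would otherwise report absurd bandwidth numbers.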
And that is where we ran into some problems. This probably needs to be revisited by me, to see why we had such a differential in the checksum. But let's talk about the results first. C rendered a 20,000 x 20,000 pixel Mandelbrot set in about 1.7 seconds, extremely fast, with a checksum of 690497749. That doesn't mean anything by itself, but if you go to the Mandelbrot page of the Benchmarks Game, you'll find it's pretty close to being in line with their calculated result, which indicates we were close to their code: not exact, but pretty close. Rust, on the other hand, was faster at 1.599 seconds, but with a checksum of 59809. So a little higher, when the checksum should be about the 400 level. It was close, but no cigar; it wasn't identical, which indicates there were differences in the algorithm somewhere. That run produced visually correct fractals as far as we could see, but obviously there were slight differences, enough to throw the checksum off. Rust and LLVM take stricter approaches in some cases, and C compilers may fold or reorder expressions more aggressively. That's exactly why checksum-based tests are valuable: you learn not only what's important but what you're comparing against. So Rust is not universally faster than C, at least in my tests, but we don't have the definitive check here yet. We did check pointer-heavy tests, floating-point workloads, memory-bound workloads, and numerical stability via Mandelbrot, and I think that's why Roy Longbottom's benchmarks are still relevant today: they care about the actual machine code that ends up running on silicon. Now, all of this leads to an interesting point. Because while we've been comparing Rust and C, in the context of Linux there's an entire operating system's worth of things out there that you may choose to run.
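The checksum idea, verifying the algorithm as well as the picture, can be sketched like this. A hedged illustration only: this toy sums per-pixel iteration counts as its checksum, whereas the actual Benchmarks Game program checksums the packed output bitmap, and the coordinates and iteration cap here are arbitrary:

```rust
// Escape-time Mandelbrot with a simple checksum: any change to the
// algorithm shifts the per-pixel iteration counts, and therefore the
// total, even when the rendered image still "looks right".
const MAX_ITER: u32 = 50;

fn escape_count(cr: f64, ci: f64) -> u32 {
    let (mut zr, mut zi) = (0.0f64, 0.0f64);
    for n in 0..MAX_ITER {
        if zr * zr + zi * zi > 4.0 {
            return n; // escaped after n iterations
        }
        let t = zr * zr - zi * zi + cr;
        zi = 2.0 * zr * zi + ci;
        zr = t;
    }
    MAX_ITER // never escaped: point is (probably) in the set
}

fn mandelbrot_checksum(n: usize) -> u64 {
    let mut sum = 0u64;
    for y in 0..n {
        for x in 0..n {
            // Map the n x n grid onto roughly [-1.5, 0.5] x [-1.0, 1.0].
            let cr = 2.0 * x as f64 / n as f64 - 1.5;
            let ci = 2.0 * y as f64 / n as f64 - 1.0;
            sum += escape_count(cr, ci) as u64;
        }
    }
    sum
}

fn main() {
    println!("checksum: {}", mandelbrot_checksum(64));
}
```

Because floating-point results depend on evaluation order, two compilers that fold or reorder these expressions differently can produce pixels that escape one iteration earlier or later, which is exactly the kind of drift a checksum catches and a visual inspection misses.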
So yeah, it's about picking the right things and the right architecture. On your own system you may find Rust is faster than C because of the libraries that are deployed. But in this case, what we're looking at is a new distribution written completely in Rust from the ground up. It borrows heavily from Plan 9 in its basic design, and it also borrows from L4, which is a microkernel architecture. As you might guess, this is a microkernel-based system called Redox OS. It's been around for, I would say, four years, maybe five, gradually building up. It's still in alpha; it doesn't have a full feature set yet, but we're going to look at it. One of its strengths, with Rust, is that it avoids many of the trade-offs that show up inside monolithic kernels like Linux. Plus, it's an operating system that doesn't have to carry baggage around from the past, whereas Linux does: there are device drivers in there dating back 30 years or more that someone might still be using. This whole debate between Rust and C reminds me of the debate between Minix and Linux many years ago. Linus had published his code; there were some initial versions of it floating around that you could pick up and, with great pains, get to install and run. Then it was looked at by Andrew Tanenbaum, the author of Minix. Minix is a microkernel system, very Unix-like, with an academic flavor and an academic design; it tried to teach good principles of how to build a proper operating system. So Tanenbaum kind of took Linus to task in the Usenet forums, saying that basically Linux was not good, that it wasn't living up to what a modern operating system should be. Well, Linus got mad. Yeah, he got mad.
And so yeah, they were exchanging blows, and it got ugly for a while. Though for most of us it was kind of entertaining to watch; I'm not going to relive all that. But the things that were true of Linux back then are no longer true. Some of the criticisms Andrew laid on it were that it didn't support enough platforms. Well, Linux was just getting started; there weren't a lot of people working on all the different ARM-based machines, or the Fujitsu processors, or even the IBM PowerPC platforms. They had Intel x86 and that was it, and even then it was thin: you had a specific configuration you had to stay within if you wanted it to run, certain cards, certain drivers; it had very little driver support, actually. And Linux was monolithic, and Tanenbaum just ripped it apart for that. But the things that were true then aren't true now. Linux has grown into a very large platform supporting all kinds of workloads, from edge and IoT devices all the way up to the largest supercomputers. I think we all know that what Linus was building wasn't an academically correct operating system; what he was looking for was something practical that people could use. Imagine that. He wasn't trying to pass a bunch of academic tests; he was just trying to see if he could get the thing to work and whether people would want it. The debate was pretty heated, and I think history chose Linux for one simple reason: performance and practicality usually win out over designs that are theoretically elegant but not necessarily practical. It can look like a mutt and people will still use it; that's just the way it is.
But Linux ran fast, and it started to run everywhere; today it's ubiquitous. So the things Andrew criticized Linus's Linux for aren't really true anymore, because the world has changed. As we wrap this up today, I think we need to remember that none of these benchmarks exist in a vacuum. They were all developed for specific purposes: Dhrystone, Whetstone, MemSpeed. There was even Linpack, with a small, light version meant for running on DOS, basically. There was Mandelbrot, which was crafted as well. And there were people like Roy Longbottom who actually made this happen. He gave his time, he gave his skill, and nobody thanked him; he didn't earn any prizes or get much recognition for it. But people in engineering knew who he was and what he did. Even in the 70s and 80s, people were quoting Dhrystone and Whetstone numbers on every machine they could find data for, wherever somebody had run the benchmarks. But Roy didn't stop. He kept refining his work, finding new ways to improve it, and adding new benchmarks. If you go to his site, these all belong to his legacy, but he has newer benchmarks that test newer things; I just started here because these are the ones I'm familiar with. And he kept that website going; I'll put his link below, too. We need more people like Roy, because without people like that we'd never have been able to answer the question: comparing two compilers from two different vendors is one thing, but what if they're completely different languages? How do you sync them up? That's what the checksum was for. So that brings us to the last benchmark, which I don't have ready yet: the Livermore Loops. This is a stress test. It exposes everything the compiler does.
It'll show what the compiler gets right, and it'll show what it gets wrong. It's built for parallelism, and that's exactly what we need, because that's what our systems are built for: multiple processors able to do parallel processing. Even where the operating system is simulating some of that, you can use this test and get truth from it. Without going into too much detail, let me put it this way: it's built up of kernel tests, 24 of them currently, that beat on different parts of the compiler to expose weaknesses. It makes three calibration runs with those 24 kernels; I don't know if that's to warm up the processors, or to try to converge on alignment so we don't get as much variation in the floating-point arithmetic as we go through it. Anyway, we're going to run that on both C and Rust and find out what it does. I don't know what it'll do; we'll find out. My hope is that this will be simple, even though the application itself is a whopper. It's big. Hopefully it'll settle the argument: is C or Rust better? We have a partial answer, but not a complete one, so I'm not saying which is better until that one is working. I hope you enjoyed this video, and this little look back at some of the problems that were solved. Whenever you're looking at new languages, drag some of these old benchmarks out; they're designed to look at exactly that sort of thing. Anyway, I hope to see you in the next video. Please like and subscribe, and bye for now.
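For a taste of what those 24 kernels look like, here is a sketch of Livermore Kernel 1, the "hydro fragment", transcribed from the published loop shape into Rust. The constants and array sizes below are arbitrary demo values, not the official calibration data:

```rust
// Livermore Loops Kernel 1 ("hydro fragment"):
//   x[k] = q + y[k] * (r * z[k+10] + t * z[k+11])
// A tight dependency-free loop that stresses fused multiply-add and
// vectorization; requires z.len() >= x.len() + 11.
fn hydro_fragment(x: &mut [f64], y: &[f64], z: &[f64], q: f64, r: f64, t: f64) {
    for k in 0..x.len() {
        x[k] = q + y[k] * (r * z[k + 10] + t * z[k + 11]);
    }
}

fn main() {
    let n = 100;
    let y = vec![1.0f64; n];
    let z = vec![2.0f64; n + 11];
    let mut x = vec![0.0f64; n];
    hydro_fragment(&mut x, &y, &z, 0.5, 3.0, 4.0);
    // 0.5 + 1.0 * (3.0*2.0 + 4.0*2.0) = 14.5 for every element
    println!("x[0] = {}", x[0]); // prints x[0] = 14.5
}
```

Because each iteration is independent, this kernel is exactly the kind of loop where a compiler's vectorizer, and any bounds-check elimination, shows up directly in the timing, which is why the suite uses it as a calibration point.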

Video description

How do you actually benchmark two different programming languages? Not opinions. Not hype. Not what’s trending on social media. Real data. Today we take one of the most requested comparisons in software engineering — C vs Rust — and test them the only way that matters: by measuring the executables they produce, not the compilers themselves. To do this right, we pulled out a set of legendary benchmarks developed and maintained for decades by engineer Roy Longbottom, whose work shaped how entire generations of programmers evaluated their systems. Roy’s suite allows us to test speed, accuracy, data integrity, pointer behavior, math stability, and memory throughput — everything a real language comparison needs. Benchmarks Included • Dhrystone — pointer-heavy, string-heavy, logic-heavy • Whetstone — floating-point and math operations • MemSpeed — memory throughput, scaling, and stalls • Mandelbrot — algorithmic complexity and numerical accuracy • Livermore Loops (coming soon) — the ultimate compiler stress test This isn’t a “Rust good, C bad” conversation. It’s not a “C forever, Rust is slow” argument. It’s an engineering evaluation of actual performance and correctness. Why This Matters Rust is the new darling of systems programming. C is the old warhorse that still powers the world. Everyone has an opinion — but almost nobody brings data. And when you measure correctly… data wins. Stick around to see what really happens when C and Rust go head-to-head in a fair fight. And yes, at the end, we talk about the Rust-based OS you haven’t tried yet—Redox OS. 
Roy Longbottom Website: http://www.roylongbottom.org.uk/ Mandelbrot Game: https://benchmarksgame-team.pages.debian.net/benchmarksgame/performance/mandelbrot.html Table of Contents 00:00 - Initial 00:47 - Different Program Language Testing 02:48 - C and Rust Test Methodology 04:07 - Dhrystone 04:30 - Whetstones 05:07 - Mandelbrot 05:38 - Dhrystones Test 06:58 - Whetstone Test 08:12 - memspeed Test 09:18 - Mandelbrot Test 11:27 - What else is Needed? 12:01 - So What Happens if We Match OS to Rust? 12:37 - Redox OS, a Rust OS 17:36 - Final Thoughts 19:32 - Livermore Loops

© 2026 GrayBeam Technology Privacy v0.1.0 · ac93850 · 2026-04-03 22:43 UTC