bouncer

Craft Computing · 5.3K views · 163 likes

Analysis Summary

20% Minimal Influence

“Be aware that the host's 'responsible journalism' framing and retraction, while likely sincere, also serve to build high levels of parasocial trust which makes subsequent product recommendations (like eBay affiliate links) more persuasive.”

Transparency: Transparent
Human Detected: 100%

Signals

The transcript displays highly natural, unscripted human interaction including filler words, specific personal memories, and real-time engagement with a live audience. There are no signs of synthetic pacing or formulaic AI scripting.

Speech Patterns: Frequent use of natural filler words ('uh', 'um'), self-corrections, and conversational interruptions ('Nailed it', 'I need to get you a glass').
Personal Anecdotes: Specific, non-generic stories about selling P4 cards on eBay, laser cutting glasses in a garage, and visiting specific breweries in Vegas.
Live Interaction: Real-time responses to YouTube chat comments (Atomic Duck IPA) and spontaneous banter between hosts.
Production Context: Long-form podcast format (over two hours) with established hosts and a physical mailing address provided.

Worth Noting

Positive elements

  • This video provides highly specific technical insights into ZFS storage and legacy GPU repurposing that are valuable for homelab enthusiasts.

Be Aware

Cautionary elements

  • The host's high level of transparency regarding mistakes (the GPD retraction) can lead to a 'halo effect' where viewers may lower their critical guard regarding his commercial affiliate recommendations.

Influence Dimensions


Knowing about these techniques makes them visible, not powerless. The ones that work best on you are the ones that match beliefs you already hold.

This analysis is a tool for your own thinking — what you do with it is up to you.

Analyzed: March 13, 2026 at 16:07 UTC · Model: google/gemini-3-flash-preview-20251217
Transcript

Uh, it's thinking. We got the little spinning wheel. Oh, you're live. And I got a picture. There we are. >> Hello everyone. >> Welcome to Talking Heads episode 419, your once weekly live show for the latest in beer and tech news. I'm Jeff and uh I'm joined by a special guest tonight. You want to introduce yourself, my good friend? >> Hey, thanks for having me on the show, Jeff. Uh my name is Chris. Some of you might know me as Honeybadger from the TrueNAS forums. Uh, despite the meme, the Honeybadger actually does care. He cares about your data and about the safety of it. So, by all means, that's why I'm here. Uh, Jeff said, "Hey, come on. Let's bust some ZFS myths. Tell us what's going on with TrueNAS. What are you guys doing in, you know, in the past, in the future? What's your plans?" Um, so yeah, glad to be here. Uh, I've been a big fan for a while. Uh, you're responsible for both of the P4s that exist in my cloud gaming server. Uh, I got them thankfully right when the video came out before the prices spiked. So, >> excellent. Yeah, >> I I am responsible for so many P4 sales on eBay. Uh what's nice is I do have an affiliate link to eBay and uh I I think at last count I had sold something like 400 or 500 P4s uh over over the lifetime of those couple of videos that I did on them. Um >> that is impressive. >> So yeah, >> they are they are great little cards. It's basically like a 1080 that went through the wash on high heat and yep, you know, >> they come out pretty well. little little shrinkage, but you know, >> yep, >> they they still work great. [gasps] >> Uh anyway, welcome to Talking Heads. Uh thank you all so much for joining us on this Wednesday night or in podcast form over on Spotify or wherever your favorite podcasts are found. Uh we do drink alcohol in the show and if you're drinking along with us, alcoholic or not, let us know in the chat and we'll uh see if we can give some early show shoutouts before we uh we get the show on the road.
Uh, if you like free content on YouTube and you want to help support the channel, there's a couple ways you can do that. Head on over to patreon.com/craftcomputing, give me a follow. And the bonus to that is you get exclusive access to my Discord server. Uh, you can chat with myself as well as all the other hosts from Talking Heads. Heck, even Honeybadger is on over there. Um, and that is always a fantastic time. Lots of people hanging out through the week. Uh, there's about,00 active users on there and, uh, it's fantastic. The reason I ask you to pay is it keeps the trolls on the other side of the moat. So it's it's a community that is actually dedicated to be there and wants to be there and wants to build a big a good community, not just, you know, hurl insults at each other. It it's awesome. I I recommend it. You can also go to craftcomputing.store. Uh we've got pint glasses, whiskey stones, uh bottle openers, all of it designed and made right here in house. I have laser cutters in my garage. If you place an order, I go and laser cut the glass and then drop it in a box and ship it to you. That's how it works. It's a glorified Etsy store, but it is really, really high quality merch. So, craftcomputing.store. Uh, I need to get you a glass. >> Yeah, I was going to say it looks like somebody looks like we got our first beer comment in the chat. He says he's drinking Atomic Duck IPA. >> Oh, excellent. Excellent. >> Yeah. Um, uh, is that, uh, gosh, what's the brewery down in, uh, Vegas? Uh, is that Avery Avery Brewing? I think that might be the Atomic Duck. Um, hold on. Oh, no, that Avery's Boulder. Where's Atomic Duck? I want to say it's out of Vegas. Yeah, Able Baker. There we go. Nailed it. [laughter] I knew I had I'd seen that one. I've been to that brewery and I've had Atomic Duck at the brewery and it is fantastic. So, excellent choice. >> I I see the comment there called out from green protagonist. I've been talking with you in Discord a lot.
Uh I'm envious of that, you know, giant ducted blower you got going on for your P4s in your setup. Uh very impressive. Unfortunately, my Bazzite testing is not going as well as I'd hoped. Um, for whatever reason it is, Bazzite doesn't seem to like the P4s as well as some of the other cards. So, I might just try like a stock Ubuntu or something like that. Not a not a canned distro. >> Yeah. Uh, Pop OS also has, I guess, pretty amazing support for it as well. So, if you don't want stock Ubuntu, Pop OS is another great option for that. >> Cool. >> Um, let's see. Uh, Michael's got a Cigar City Brewing, uh, Maduro Brown Ale, 5.5%. That sounds awesome. Uh, let's see. Uh, uh, Green is also having 10 ounces of coffee, raw sugar, non-dairy, French vanilla. That's totally allowed as well. Uh, as for me, uh, I've got a couple of choices tonight. Uh, I think I'm going to start... ah, tough call. I'm going to go with uh Alvarado Street Brewing. Tayberry Cream Imperial Sour with Tayberry, Blackberry, Raspberry, and Vanilla. So, something a little out of left field tonight, but that one was just kind of calling my name. uh 9% by the way. >> So nice. And it's it's a true sour ale. It's not like a jammy sour like we've been uh seeing very popular lately. So it's not like trying to eat a berry cobbler in beer form. [snorts] We got a question here uh from James Steinbreaker. He says, "What is this P4 we keep talking about? That's obviously not a Pentium 4, Jeff. What are we talking about? What's uh >> that is the Nvidia Tesla P4. And if you give me about hold on 6 seconds here. 1 one thousand, 2 one thousand, 3 one thousand... 6 one thousand, and I'm back. That right there is a Tesla P4. It is a half height, half-length server GPU. Uh blower style GPU. So you uh it needs chassis air flow uh inside of a server. Uh but this is a GP102 GPU die. Uh it is or 104, excuse me. Uh it is the same exact GPU die in the 1080 and the 1070 Ti.
Um it is uh only a 70-watt TDP card, so it's a little bit lower power, but they can still do 1500 MHz pretty easily on boost clocks uh with just a little bit of tuning. Uh, so you can get 85% of the way to the performance of a 1080 in a card that is this big. Um, and if you put two or three of them into a server, uh, you can get some pretty awesome gaming performance out out of one of these cards, uh, and stream it to a handheld. Or let's say you want a little bit more power out of your Steam Deck. You can run a VM on a server with a P4 on it, stream it to your to your Steam Deck, and quadruple your performance. Um, so they are pretty killer little cards and actually they've gone way down in price again. Uh, I think they are back down into like the 70 or $75 range. Fantastic value. >> I was uh just looking at them. I think it's set I got 78.50 with a uh PWM blower and a 3D printed cap on the back there for it. That's a pretty good deal. >> Yeah. the the reason they're dropping down now is because Nvidia is of course shedding support for anything older than Turing in the 590 drivers. Yep. So, >> yep, yep, yep. But yeah, they're a lot of fun to play with. Uh if you do things with Proxmox, VMs, any kind of thing like that. Uh also fantastic transcoding cards. If you uh if you have like a Plex server or a Jellyfin or something like that, uh they they do I believe >> something like 20 uh H.264 or H.265 streams uh because they have uh dual encoders on them and Nvidia unlocked Oh, no, it's 16 because it's uh it's eight per uh eight per uh encoder. So, you could do 16 uh 1080p or 4K streams off of this one little card. >> Yeah, they're they're pretty good for that. >> Yep. Yeah, >> good times. >> Speaking of handhelds, you want to take that segue there, Jeff? Handheld news. >> Absolutely. So, uh, last week on the channel, uh, we covered a little bit of a kerfuffle between Bazzite and GPD, uh, one of the original manufacturers of, uh, handheld gaming devices.
Um, and, uh, I I need to do a bit of a retraction. Uh, as it turns out, I didn't have the entire story yet. Although at the same time, neither did Bazzite or GPD. Uh, but I I jumped the gun and I I flat out blamed GPD for this confusion. Um, GPD at the end of December made a statement publicly that said, "We will be officially supporting Bazzite on our handhelds moving forward, including starting to ship Bazzite on our handhelds." Also, Bazzite will be supporting GPD devices uh moving forward as well. GPD has shipped Bazzite sample devices of all of our upcoming and previous devices that we would like them to support and they are fully on board. To which Bazzite said, "What?" Uh >> they said, "That's news to us." >> Yeah. uh and uh GPD was directing uh support traffic to a uh to their own Discord server for any any Bazzite related support. Um and GPD's done similar things before uh that like they jumped the gun on Steam OS announcements and and were directing people to HoloISO when HoloISO wasn't aware of it. And I thought this was exactly the same behavior. Uh it turns out uh in a statement from uh Bazzite themselves um it turns out that a former team member from Bazzite was in communication with GPD using his personal email account and likely received GPD devices uh without authorization and without the knowledge of of the Bazzite team. Uh so basically taking GPD for a ride uh on potentially getting free devices or promising thing things that he obviously couldn't deliver as a former team member of Bazzite. Um so the story is way more interesting now. Uh and and definitely not the fault of of GPD in any way, shape, or form. They it like I said, it sounds like they they were taken for a ride in this whole situation. Uh, and as I'm an upstanding journalist and and I'm not afraid to be wrong or even correct myself when I am wrong, I would like to issue an apology to GPD for jumping the gun on this story. Uh, there was obviously more to it and uh, there it is.
>> What is this responsible journalism on the internet thing, Jeff? This is completely new. >> I know, right? Now, if I was truly a responsible journalist, I would have reached out to GPD for comment beforehand. Although on the podcast it's more of a back and forth kind of show. So it it's just hey this was in the news. Let's talk about it. Uh it's not really like formal reporting. Uh if I would make a dedicate dedicated video on this situation, absolutely I should have reached out to GPD and things like that. Podcast's a little bit different format in my opinion. Uh but uh I'm happy to say that I was wrong. >> We're kind of the comment section for the news feed here. >> Exactly. Exactly. we're we're seeing the news come in the same as you and we're just kind of reacting honestly to it. Uh but like I said, I was wrong and and I'm I'm happy to know that I was wrong. Uh that this wasn't GPD with a >> you know, already a situation they probably making another oops, you know. Uh >> yeah. So yeah. Yeah. Uh honesty on my news podcast. How dare you? [laughter] That's not allowed. >> Yeah. >> This is This is the internet. Everybody should be upset over everything and never apologize. >> That's how it works, right? >> Exactly. Yeah. >> Yeah. >> I mean, I should have doubled down. I should have said, "Well, GPD might have been taken for a ride, but they should have known better." Like, you know, I'm supposed to be angry guy up here, right? [snorts] >> Exactly. No, like I said, I'm I'm I'm happy to know it was uh something as simple as uh an innocuous mistake. Well, I mean, it's it sounds a little bit more sinister when you say, "Hey, there's a former employee using his personal email to to get devices from them." like, okay, well, there's there's probably some layers to this and we'll want to see how this all unfolds and yeah, >> wait for comments from both sides before we go in there. >> Yeah.
Um, yeah, Bazzite has said they will not be making any further statements on the situation. Uh, they were a former employee and as far as I understand will probably remain a former employee. Um, but, uh, they said he's not our employee anymore, so it's none of our our business anymore. But that's the situation and we're we're happy to have cleared it up. So >> that's good. >> Yeah. >> Excellent. >> Skull says, Hedonismbot: I apologize for nothing. [snorts] >> Of course not. >> Well, it's uh that's good news, I guess, to to hear that they're, you know, figuring it out. Can I be a former employee? [laughter] >> Former employee of what? I mean, it's it's pretty easy to become one, but it's harder to come back from that. >> Yeah. >> Kind of a oneway. Yeah, >> street for a lot of things. >> Yeah. Uh, if anything, I wouldn't be using Bazzite for a reference if I was him anymore. [laughter] >> No, probably a bad idea. >> Looking for a future employer. >> Yeah, [snorts and sighs] I see a couple a lot of people are going back and forth again. We've uh I think we got another P4 convert in the making in the chat here for uh for James. >> Excellent. Yeah, Jo join us, James. It's uh >> they're a lot of fun. Um, as Green says, if you're looking for a just a pure transcode card, uh, something that that you just stick in a Plex server, right now the winner out there is the, uh, the Intel A310, specifically the Sparkle, uh, Eco Eco, >> which exactly uh, which is their uh, card in the same form factor as this, but it does have a fan on board. Um, that card is 35 watts and supports AV1 encoding. >> Yeah. Now, there's a couple important notes. Is that that Sparkle Eco was not always the perfect choice for two reasons. >> One of which was in the initial manufacturing runs, the shroud prevented you from getting to the screw to remove the full height bracket and swap to a low height one. >> Yep. >> So, that was problem number one.
Problem number two was they had issues with fan ramp in the initial revisions of the firmware. That was fixed with newer cards. Uh you can flash that newer firmware in there. I believe it goes with the Intel driver like the Arc package that you download will actually flash the new firmware there. Yeah. Um if you're running it in Linux, people have devised a way to you know you can either create a VM, pass your card to it, and run [clears throat] your patch in Windows or you can basically extract that firmware file from your Windows installer >> and push it to the card in Linux. So >> but great cards now. >> Yeah. Um yeah, that that's what I'm running for all my transcode needs. Um uh if you if anyone's been following along with my microcloud project um I've got uh two of my my nodes in that system uh running A310s each with a Plex server on it. So if one of the nodes goes down, Plex stays up because that's kind of important [laughter] with my house and and family and friends. Um so yeah, we are running redundant Plex servers in my house. Um >> redundant array of Plex. Yeah, that's pretty neat. Um, uh, and that all works great. Uh, but, uh, I do have one of the early run A310s and one of the later run ones where they drilled a hole and then put like a little plastic plug in the shroud so you can get to that screw. >> Um, I didn't I didn't know about that at first and uh uh, so I took apart my A3 or I'm trying to take the bracket off and I'm like, why the hell is this not Oh, there's another screw. Why is it under the shroud? And so I had to disassemble the entire card. Uh, which means taking the fan off, new thermal paste, new pads, like it's a whole deal to to get that one disassembled. Uh, luckily the second one that I bought has the little plug on it, so nice and easy. >> Yeah. All right. I'm I'm seeing a few questions. I'm seeing somebody popping in here.
I think I've seen you uh in our YouTube comments before, they're uh asking um talking about the recycle bin for SMB as well as the Unix permissions. Um like Unix permissions, we're we're not forcing ACLs. you can use your traditional, you know, octal Unix permissions on a data set. You just have to set it that way. Um, you can use those. So, we haven't taken it away. The reason that the the ACLs, the NFSv4-style ACLs are becoming more common is because we we want to that's that's necessary for like mixed-mode SMB/NFS shares. Um, it's it tends to be a little bit more granular than the traditional Unix octals, but you can still use them if you want. Recycle bin is a little bit more complicated. That one actually did get pulled out um by one of our Samba developers basically because it was it had odd interactions with snapshots. There were permissions issues. You know, basically somebody could put a file into the recycle bin and because the recycle bin was common across a share, not at a permission level. Um like everything's going into this like recycle effectively and somebody else is going in there and fishing it out. So it it was potentially like exposing things like that. um really bad edge cases and again part of it you know a lot of people say oh TrueNAS is for home well it's you know we we develop for enterprises too they get really mad when you have things that can do end runs around auditing um and they basically scrub those out they're like hey that that can't exist we looked at that and we're like hey we agree so I I know it's a bit of a pain to not have a recycle bin in there and if somebody like accidentally deletes a file you got to go into a snapshot to pick it up but hey snapshots are plentiful cheap and really easy to do and as long as you don't stack like a thousand of them on top of it, you really don't have a significant performance impact. >> Yeah. [snorts] >> Yeah.
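The "go into a snapshot to pick it up" workflow mentioned above relies on the hidden `.zfs/snapshot` directory that ZFS exposes under every mounted dataset. A minimal sketch of that recovery step (the dataset mountpoint, snapshot name, and file paths below are hypothetical examples, not from the show):

```python
import shutil
from pathlib import Path

def restore_from_snapshot(dataset_mount: str, snapshot: str,
                          rel_path: str, dest: str) -> Path:
    """Copy a file out of a ZFS snapshot instead of a recycle bin.

    Every mounted dataset exposes read-only copies of its snapshots
    under <mountpoint>/.zfs/snapshot/<snapname>/.
    """
    src = Path(dataset_mount) / ".zfs" / "snapshot" / snapshot / rel_path
    dest_path = Path(dest)
    dest_path.parent.mkdir(parents=True, exist_ok=True)
    # The snapshot side is immutable; copy out rather than move.
    shutil.copy2(src, dest_path)
    return dest_path
```

Usage would look like `restore_from_snapshot("/mnt/tank/share", "auto-2026-03-12", "docs/report.ods", "/mnt/tank/share/docs/report.ods")`; the same thing is a one-line `cp` in a shell, which is part of why the hosts consider a separate recycle bin redundant.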
Uh recycle bin like I think exactly like you said is is nice but I think if you're running ZFS you should be enabling snapshots anyway even if it's just you know two or three of them so you can go back you know two three days or whatever you you set your policy to because that takes up almost nothing as far as space. It takes almost nothing as far as resources. You get all the benefits of a recycle bin without having to manage a separate recycle bin. >> Yeah, you can um, let's say, prefer recycle bins to the snapshots. >> Uh snap snapshots are only overkill until you really need them. And the benefit of a snapshot is it's immutable in that sense. Like >> you you can try to say, "Oh, I throw something in the recycle bin and you delete it." Well, now it's gone for good. If you have a snapshot, you can delete it off that main one, but then you go, "Well, I can pull it back out of the snapshot. There's no way I can destroy that unless my snapshot expires, >> right?" >> Um, what we'll often do for a lot of our enterprise customers is we have this like rolling chain of snaps where they're taking snaps every 15 minutes for an hour, every hour for 6 hours, every six hours for a day, every day for a week, and every week for a month. It cascades down like that. So the further you go from, you know, epoch time, the, you know, the less granularity you have, but very quickly you can pull back every, you know, 15 minute or whatever granularity. If you go back an hour, it's okay, you're pulling it every hour. Maybe you're pulling the copy they worked on this morning, >> but it keeps your total snapshot count down, means you're wrapping up and consolidating them fairly quickly. But that kind of stuff is really invaluable. And that's one of the major reasons that you know ZFS is so powerful is because you have all that snapshotting right there.
>> You know uh uh whenever I would deploy uh ZFS systems over to to my old enterprise clients uh you I think you summed it up perfectly. You do every hour for a week, every week for a month and every and we did every month for a year. Um and uh so that means at any point in time within the last 12 months I can go back and pull up a specific version of a file. if someone deletes it, moves it, loses it, whatever. Um, and and that always worked very very well for us. Uh, one way we kept it from getting a little bit out of hand was we also set our snapshots to only capture during the business day because they're not the businesses uh that I was working with aren't running 24 hours a day. They're businesses that are operating 8 to 5. And so you set your snapshot policy from 7:00 a.m. to 6:00 p.m. Cool. We just cut out 13 snapshots that we don't need to take during the day. Exactly. Even if they were going to be primarily empty, it's still the metadata, the overhead of having to block that tree there. >> Exactly. >> Um, still going on this conversation about, you know, what about when I delete a bunch of stuff, can you recover the space when you're using snapshots? >> Well, no, not until that snapshot expires, right? >> Because the idea is you say, hey, if you're taking a snap every hour and you expire it after a day, you don't get that full space back until that initial snapshot where the deletion happens, uh, expires. So, hey, I mean, you know, don't and and don't worry about not fully understanding. I mean, ZFS is complicated. It's uh it's one of those things where, you know, oftentimes you're going to hear it probably many times on here where the the answer to something is yes, but or [laughter] yes, but it'll be like yes, you can do that, but I think I said it last week on the my own show about juggling chainsaws. It's a great I you know, it's not a great idea. You can do it. I can do it.
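The cascading retention schedule Chris describes (every 15 minutes for an hour, hourly for six hours, six-hourly for a day, daily for a week, weekly for a month) plus Jeff's business-hours window can be sketched as pruning logic. This is an illustrative model of the policy, not TrueNAS's actual snapshot-task implementation; the tier values are the ones quoted on the show:

```python
from datetime import datetime, timedelta

# (how far back this tier reaches, spacing between kept snapshots)
TIERS = [
    (timedelta(hours=1),  timedelta(minutes=15)),
    (timedelta(hours=6),  timedelta(hours=1)),
    (timedelta(days=1),   timedelta(hours=6)),
    (timedelta(weeks=1),  timedelta(days=1)),
    (timedelta(days=30),  timedelta(weeks=1)),
]

def snapshots_to_keep(snaps, now):
    """Return the subset of snapshot timestamps the cascade retains."""
    keep = set()
    for window, spacing in TIERS:
        cutoff = now - window
        last_kept = None
        for ts in sorted(snaps, reverse=True):  # newest first
            if ts < cutoff:
                break  # rest of the list is older than this tier
            if last_kept is None or last_kept - ts >= spacing:
                keep.add(ts)
                last_kept = ts
    return sorted(keep)

def take_snapshot_now(now):
    """Jeff's business-day window: skip the overnight snaps entirely."""
    return 7 <= now.hour < 18  # 7:00 a.m. to 6:00 p.m.
```

Anything not matched by a tier would be destroyed, which is how the total snapshot count stays bounded while recent history keeps fine granularity.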
I'm probably gonna lose a limb and it's gonna end horribly, but I could probably do it for about three to five seconds. >> Yep. >> That's about [laughter] it. >> Yep. >> Uh oh, we got a big question here and a donation uh from not a professional. Sounds like he's a professional. Says, "I've heard if I don't use ECC RAM in my TrueNAS build, I will be tormented by swarms of wasps, afflicted with boils, disowned by my parents and dog. Also, my data will all evaporate. Please confirm/deny." Um, deny. There are people who run without ECC RAM all the time. >> I am not one of them personally. Uh all my TrueNAS systems run ECC memory. Uh I've even had, you know, desktops that have run ECC memory. I'm not using one right now, but >> all all of my systems run ECC. And it's not for me so much about data safety as the fact that if a stick of RAM starts to go bad, it is going to sell itself out in a heartbeat. And I'm not going to worry about, hey, it's is is it maybe my RAM? Is there something that's like having an issue uh elsewhere? I just go, "No, I saw a machine check error show up in the log and it pinpointed the exact stick of RAM that's failing." So, I know that I get to kick that one out. When I go to RMA it, I send a machine check error log back to the vendor of that RAM and they typically don't argue with those. They say, "Okay, yeah, this guy's bad. You're getting a new one." >> Yep. >> Um, yeah. But we have several people even in you know even in the engineering department at TrueNAS they are running non-ECC and believe it or not some of them are running Realtek NICs on their TrueNAS machines. So uh what used to be complete heresy under BSD is uh not quite so heretical anymore under Linux. >> Yeah. Um I I have gone back and forth with uh with ECC on my personal machines as far as TrueNAS goes. I've I've ran with I've ran without.
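The triage Chris describes, reading machine check / EDAC reports out of the kernel log to pinpoint the failing DIMM, can be sketched as a small log scan. The log-line format below is illustrative only (real EDAC/MCE output varies by platform, driver, and tooling such as rasdaemon), and the DIMM labels are hypothetical:

```python
import re
from collections import Counter

# Illustrative pattern loosely modeled on kernel EDAC corrected-error
# reports; adjust to whatever your platform actually logs.
MCE_RE = re.compile(r"EDAC MC(?P<mc>\d+): \d+ CE .* DIMM_(?P<dimm>[A-Z]\d)")

def failing_dimms(log_lines, threshold=1):
    """Count corrected-error reports per DIMM slot, noisiest first."""
    counts = Counter()
    for line in log_lines:
        m = MCE_RE.search(line)
        if m:
            counts[m.group("dimm")] += 1
    return [dimm for dimm, n in counts.most_common() if n >= threshold]
```

The point from the transcript stands either way: with ECC, the hardware both corrects the error and names the slot, which is exactly the evidence vendors accept for an RMA.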
Um, we'll get into this conversation a little later with some other features with TrueNAS about like what is the recommended or minimum specs or whatever else. But then the minimum specs are often written with enterprise environments in mind with production environments in mind. As a home lab user, you have different requirements. You have different uh different needs and different uh criteria for what's acceptable. Uh on the enterprise side of things, the reason they say ECC memory is required is if you're doing a a if you you're holding a massive database, a bit flip can be the difference between your data being secure and not secure or being kept or not accurate or or whatever the case may be. At home, if I'm storing 7 terabytes of Plex media, it's fine if I flip a bit. It's not going to matter. uh and and so it is what is your expectation of the system, what is your expectation for the data integrity and what is your fault allowance within that system. [snorts] >> Exactly. You you drop a frame on your Plex server or you get one corrupted pixel there. >> Not a big deal. If you remember what I said about the audit trail and enterprise customers not liking that, they they don't like, hey, this this got corrupted. [clears throat] >> Bad idea. Yeah. Um, yeah. Comment here. He says, "I'd rather have backups before ECC." Why not both? [laughter] Why not both? >> Why not both? Have them both, you know? >> Yep. >> It's pretty nice. >> Uh, yeah. We got some people here advocating say, "I don't use ECC and it works great. No issues for the past two years." >> Um, that that's one thing I want to call out here. Um, people say, you know, oh, there was the the scrub of death rumor here where non-ECC would actively corrupt the data on your disc. M >> um ju just not possible like the amount of mathematics that would have to get wrong there is probably not going to happen before the heat death of the universe.
So >> that's one of my favorite phrases to to explain uh the likelihood of of an event. >> Ma mathematically impossible is a pretty good one. I I will sooner win a you know eight figure lottery than somebody's pool would destroy itself. >> No >> for that. Yeah. got non-ECC. Yeah. Okay. And and I see uh Vince there from Circle Rewind advocating for BSD as I expected him to come in here and do. [laughter] Um that that's great that real Realtek [snorts] works there. Um send iSCSI traffic to it over a FreeBSD system with the uh a default in-box driver and let me know how the memory corruption works. uh because that's one of the reasons that we advocated against it there and we actually pulled the driver out of TrueNAS. I think in version 12.0 update 3 I think we actually dropped the driver out by default and we said hey uh if you want this back on >> you can use it as long as you're not doing iSCSI use SMB use NFS knock yourself out um use iSCSI it's basically gonna >> it's going to blow up on you and we don't want that. Um, have you had any experience yet with uh Realtek's new affordable 10 gig driver? Um, have you seen that chip come out? >> I have not. That's the Was that is that the 81? Was it 8169? >> Yeah, that one. >> Something like that. Yeah. >> Yeah. Um, I've seen it. Believe us, we've seen it internally as soon as it showed up, I said we are going to have people wanting to run this thing. >> Yes, you are. >> Um, y >> one of the challenges with it is because of the cadence of our updates and cycles is as soon as new hardware drops, it's like, hey, you're you're probably in for a minimum of a six-month wait. Yeah.
before it's going to land in TrueNAS and that was the challenge when you know the Arc cards came out when Blackwell landed um even with just the revisions we we are building the basis of this is on like a LTS Linux kernel with a you know a long-term enterprise supported storage appliance >> we we can't necessarily throw everything in there now there is a great little command that you could put into TrueNAS uh that will let let you break pretty much anything you want it is uh install-dev-tools um that will give you like full GCC tool chain and everything to absolutely let you bend, twist, maul, and mutilate TrueNAS into whatever you want. But at that point, the warranty is if you break it, you get to keep the pieces. Um they they will pretty quickly close any bug ticket you submit from a system that has dev mode on. Uh the first thing they will say is, "Can you repro this on a system that does not have dev mode enabled?" >> Yeah. Can can you give us one you didn't f with first? >> Pretty much. If you want to do that, by all means, but the more you f around, the more you find out. And uh you can find out pretty quickly when you start putting random kmods in there. >> Yeah. >> Uh Green sends over another $5. Thank you very much, Green. Uh people are running uh to Unraid and ZimaOS for ease and app support. Are there friendly solutions for online connectivity, uh P2P, file sharing, transfer, etc.? Um, I don't know if you've you've messed much with uh with ZimaOS. Um, but uh they're trying to make like a um who's the other one that's that's doing this? Basically like a virtual desktop style interface where you get a desktop and then it's like file browser or or window browser kind of setup for the NAS, but all of their uh their plugins are application based. And so if you want to install Plex, you double click on Plex and you click install and and it runs. Um but in the background is just a Debian based system and and Btrfs and and a couple of other things.
Uh so it's a really nice shell on top of everything. Uh not that TrueNAS isn't a nice shell, but it's you know can be a little intimidating at times. >> Yeah. And I'll give you that is that it is a little bit more challenging. Um somebody says what about UGOS? And I guess UGREEN would be another one in that space >> and and that's kind of the it's kind of the enterprise origin to TrueNAS showing there in that it's designed to be that a little bit more of an enterprise businessy kind of focus system. Um, you know, the apps came later with the advent of Linux and SCALE and some of the early, you know, friction there when we went, you know, Docker, Kubernetes, okay, Kubernetes, okay, never mind, Docker. Um, [laughter] >> you know, just and and it's gotten a lot better with Docker as far as the the friendliness of everything going on here. Um, you know, works works pretty well. Uh, you know, the I guess the initial challenge people said, you know, was, "Hey, why did you do that whole app shift?" And it was a real pain you know and I I liked you know what why did you start on Kubernetes if you were going to Docker because initially the question was do we use Kubernetes or Docker and the choice was Kubernetes >> now what happened there was initially TrueNAS SCALE at the time was intended to scale out it was going to have ZFS on top of Gluster FS so you would be able to glom multiple TrueNAS systems together have a shared storage plane and then obviously you have Kubernetes on top of that for a shared compute plane you're going to a nice hyperconverged cluster all built on top of open source. >> Yeah. >> Well, and then Gluster FS was deprecated [laughter] and [snorts] you know we we weren't really able to take that on ourselves to say hey we're going to maintain another entire file system. We already do a lot of maintenance. We have a couple guys with commit bits for ZFS and they do a hell of a lot of heavy lifting there. >> So we said unfortunately we we can't sustain that.
So once the GlusterFS pieces were dropped, we kind of looked at Kubernetes and went, this is a heck of a lot of layers and complexity to just run as a single system. >> Yeah. >> So then we said, well, let's switch to Docker. And we did. Uh, you know, I think we did a pretty good job writing up a toolchain in there to port everybody's Kubernetes apps over to Docker in that migration path. Now that we're here, it works. It works pretty well. And yeah, F's in the chat for Gluster. Um, I'm really sad about that. I was hoping that we'd be able to make, like, a real scale-out kind of, you know, Dell Isilon >> competitor out of that. Would have been pretty nice. >> Yeah. No, especially with all of, you know, the true HA systems that you guys do and everything else, that would have been just a killer, essentially plug-and-play package, uh, on the enterprise side of things, where a lot of other high-availability-type systems are very much DIY, depending on who the customer is that's buying them. >> So I was excited when you guys were talking about the early SCALE days. >> Yeah. Vince giving me some more flack. Yes, apps and jails and plugins existed on CORE and FreeBSD too, but there was definitely a much smaller audience that was using jails, plugins, apps, you know, the equivalent of those on BSD, versus the Docker containers, things like that. >> Um, Home Lab Hazards sends over $2. Thank you very much, sir. Uh, he's one of my other plug-and-play hosts here on the podcast. Um, that's Matt. Uh, speaking of BSD, are you SCALE or CORE at home? Uh, of course I'm SCALE. Uh, yeah. Um, >> as Community Edition, as we now call it. >> Yes. >> Yeah. SCALE across the board here as well. Uh, for quite a while I was running both CORE and SCALE. Uh, I made the jump over to full SCALE with 23.10.
So that would have been Cobia, back in the day. Uh, that was when I made my jump. But initially, I was still kind of doing a lot of stuff with iSCSI, and the SCST target on SCALE was not quite mature enough at that point. I said, hey, it's not >> it's not the apples-to-apples that I wanted. 23.10 was where it started to be good enough, and then I went, okay, I'll move things over to here. Um, I dealt with manually tuning up the ARC levels before we made that the default in 24.04, Dragonfish. >> Yeah. So >> yeah, across the board. >> Yeah. I think I moved to SCALE early in '22. Um, so I know it was >> was it 22.02? Was it, like, Angelfish, the OG? >> It might have been. Yeah. I went, this is obviously the direction they want to go, and I want to see what it's about, and I think I moved my home server over at that point. Uh, because that was right after the, well, we're going to deprecate the FreeNAS name and, you know, roll out this other one, and I think I jumped right on board. Um, so I'd have to check my video publish history and see when exactly I installed it, but I think I made a video on it. Um, so >> yeah, it's been a while. Um, and never looked back, never, you know, uh, >> yeah, >> one of the big reasons, and a lot of people say, you know, why did you abandon FreeBSD? And it's like, well, we're not really abandoning FreeBSD. I mean, Chris and I, you know, maybe we'll find a link to it, I can drop it in the comments after this video is done, where we actually talked about why we did this. But one of the big challenges of it was, >> uh, you know, the hardware support on BSD is good, but one of the big things was everyone was leaning into NVMe. >> Now, NVMe hot remove on BSD works great. It's fantastic. Sleep is amazing there. Wake, not so much.
So we were having people where we'd say, "Well, you can pull an NVMe drive out of this unit, but if you put it back in, you're going to have to reboot it to get it reinitialized, >> right? >> That's not exactly fantastic." >> Linux, on the other hand, well, it just worked. >> Yeah, >> it's fantastic. >> Yep. >> So, one of the reasons. >> Yeah, all that hot swap stuff is very, very mature on that side of the aisle. Uh, Brian sends over 25 Canadian dollars. Thank you so much. Much appreciated. Uh, "dev mode is nice, but they removed the apt repo from Fangtooth, 25.04, uh, which is only eight months old. Uh, so no apt-get works at all. Uh, on the forum, I only got one response that said I should patch and restart my NAS every other week." That's an interesting response there. Uh, if you want to drop me a link to that, that doesn't sound like the right kind of response. But I mean, the challenge is that the apt repos that we have up are really there for development. And, you know, we do pretty quickly shift away from, uh... especially, like, 25.04 would have been in development back at the, you know, end of '24, early '25, is when that happened. >> So dev mode is intended for development, not so much just, you know, enabling apt on any random system, >> right? So it's kind of one of those things that's nice, but yeah. >> Yeah. >> He says, "I don't want to check which TrueNAS version I'm running. I've got 188 days uptime." Okay, that's a good one. >> Yeah, you should update your system. >> Yeah. Uh, [laughter] one final question here I'm going to pick out, from Rian: "Uh, NVMe over Fabrics for ESXi when, or should I migrate to Proxmox?" Well, you should migrate to Proxmox anyway, so that you're not giving Broadcom money. Um, [laughter] but the question of NVMe over Fabrics in ESXi is dependent on... uh, actually, Brian, the current version would be Goldeye, not Fangtooth.
Um, so 25.10. Uh, so ESXi support for NVMe over Fabrics is dependent on the NVMe target in the Linux kernel driver supporting what's called fused commands in NVMe over Fabrics. That is basically conditional execution. So you put A and B together. You say, you know, if A evaluates to true, then execute B. Um, then that's a fused command. The NVMe target, the inbox one for Linux, doesn't support that yet. Um, it is something that we're very interested in, and it's something that our developers, shall we say, have a vested interest in supporting. So without being too exact on the timeline, uh, you should expect it in '26. Uh, we'll do one more question, and then we've got time for an ad read, and, uh, I think Chris was going to step up and grab his beverage here, but we'll do this one real quick. Uh, Matt sends over another five bucks. Thank you very much, Matt. Uh, "can't wait for TrueNAS to support hot swap HBAs. I have a lot of M.2-to-six-SATA adapters that keep failing for some reason." >> He's just trying to... You're just trying to troll me. You're going to tell me they're on a JMicron controller now, and there's one real SATA port and the other ones are behind a one-to-five port multiplier, aren't you? >> Probably. Probably. [laughter] >> Yeah. Um, we do support hot... like, are you talking about hot-plugging the actual HBA, like pulling it out of the PCIe slot? Because, you know, it is possible to hot-plug PCIe cards, but you have to actually power down the slot. You have to have a board that supports dropping the power lines to those lanes before you pull that card out. And >> and I'm not sure any M.2 slots support that natively. Yeah, that's something for the regular PCIe form factor. Yeah. Um, but as far as supporting, like, hot swap cards, well, of course we do hot swap all the time. I've got hot swap systems myself. I can go yank drives out, throw them in, all the time. >> Um, it's probably that your HBA itself doesn't support hot swap.
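The "conditional execution" described above is the NVMe fused pair (in the spec, Compare followed by Write): the second command only executes if the first succeeds, atomically. A minimal sketch of those semantics, assuming a toy in-memory block rather than any real NVMe library API (all names here are illustrative):

```python
# Toy model of an NVMe fused compare-and-write pair: the Write half
# only executes if the Compare half matches, and the pair is treated
# as one atomic operation. Illustrative only -- not a real NVMe API.

def fused_compare_and_write(block, expected, new_data):
    """If `block` currently holds `expected`, replace it with `new_data`
    and report success; otherwise the compare fails and the fused write
    is aborted, leaving the block untouched."""
    if block["data"] == expected:   # first fused command: Compare
        block["data"] = new_data    # second fused command: Write
        return True                 # both halves completed
    return False                    # compare failed -> write aborted

lun = {"data": b"old"}
assert fused_compare_and_write(lun, b"old", b"new") is True
assert lun["data"] == b"new"
# A stale expected value fails the compare and changes nothing:
assert fused_compare_and_write(lun, b"old", b"newer") is False
assert lun["data"] == b"new"
```

This compare-then-conditionally-write primitive is what clustered initiators like ESXi use for distributed locking, which is why the target has to support it end to end.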
So check on which controller is in there. Uh, if it's JMicron, again, you know, shots fired here, but get rid of it. >> [laughter] >> Um, like, the ASMedia ones, the ASM1164 and 1166, are, you know, pretty good on, uh, SCALE, the Linux-based versions. >> Yeah. >> Uh, I think there was a weird thing in there where they wouldn't enumerate more than a certain number of drives, or they were trying to enumerate all 32 virtual SATA ports and hanging up. Uh, that one got fixed, I think, at 25.04. Yeah. >> But the ASMedia controllers are okay in there. Uh, the Marvell cards... I think it was one of the 88-series, one of their long part numbers, that's okay. >> Um, but yeah, you don't necessarily have to buy, you know, an LSI card, right? >> You can use those. Just avoid port multipliers, whether it's, you know, command-based or the FIS-based switching. Avoid the hell out of those. They don't work well, and especially if you've got one drive that's misbehaving on the back end of a port multiplier, it's basically going to hang up everything. Yeah. Um, just say no to port multipliers, folks. >> All right, with that, I'll let you go grab your cider, and we've got an ad read to do, because today's video is brought to you, as always, by Meter. Uh, while I enjoy building out my home lab and have now spent nearly 20 years in professional IT, not every business has a Jeff working for them. Managing your own network infrastructure without a Jeff can be a challenge: dealing with different vendors for firewalls, switching, Wi-Fi, and then, when something goes wrong, it can grind your business to a halt. That's where Meter comes in. Meter delivers an all-in-one networking stack that delivers, uh... excuse me, I took my eyes off the words for a second and confused myself.
Meter delivers an all-in-one networking stack that bundles everything you need into a single package, including high-speed wired and wireless networking, power delivery, firewall and routing, and even cellular, all in a single integrated solution that's built for performance and scalability. Meter handles everything you need, from network design, procurement, and installation, and will even negotiate with ISPs to get you the best rates on internet connectivity. All of that shows up in a single cloud-based dashboard, giving you clear visibility into every layer of your network. You get the connectivity your business needs, all for a predictable monthly cost. Best of all, there are no upfront expenses. Meter ships you the hardware you need today, and will automatically upgrade your hardware as time goes on, ensuring your users and your business always have the tools and connectivity they need. Whether you're starting a new business, expanding to new locations, or simply modernizing an aging network, let Meter take care of the hassle for you. Visit meter.com/craftcomputing to book a demo today. Again, that's meter.com/craftcomputing. And a huge thanks to Meter for sponsoring today's episode. >> Excellent. >> Thank you, Meter. I can drink for another day. [laughter] >> Yeah. So, uh, I went to my little basement fridge, and I picked up a Thornberry craft cider here. Um, had a tough time deciding which flavor. I am a cider fan over beer. >> Excellent. >> Uh, pretty moderate. It's only a 5.3. Uh, I didn't go for the vanilla bourbon, which is rocking, I believe, at an 11.6. >> Oh. >> I figured I'd stay on the lighter side tonight. >> Yeah. Um, there's a cidery here, uh, over in Bend, Oregon, that's actually grown leaps and bounds over the last couple of years, called 2 Towns. Uh, 2 Towns Ciderhouse. Um, they make a pair of ciders that are absolutely incredible.
There's the Bad Apple, which is a 10% cider, and then they also have the Super Bad Apple, which is a 12% cider. Um, they taste like apple pie crust. Like, they are cinnamony and brown sugar and deep and rich and dark, with a little bit of, uh, like, char burn to them. God, they are so freaking good. >> Yeah, this company has... I can't remember what they call it exactly, but it's much the same. It's got those cinnamon undertones. It's not quite as sweet as an apple pie, but it's quite nice. >> Mhm. >> You know, paired well with a Thanksgiving dinner kind of thing. >> Exactly. Yeah. Um, my wife has celiac disease, and so she's entirely gluten-free: no wheat, barley, or rye products at all. And so she can't drink beer. Um, and so we do a lot of ciders, a lot of wines, a lot of liqueurs around here as well. And we've always got at least one or two Super Bads in the fridge, just because it's like, "Hey, want a party tonight?" Okay, >> here's a 12-percenter. Let's go. [laughter] >> Yeah. Rock on. >> All righty. Uh, what have we got? Yeah, Matt. Matt here talking. He's saying, you know, "AMD doesn't know what power states are. I'll just unplug my PCIe at an angle. That way the power pins go out first." [laughter] Looking to let the magical blue smoke out, I see, Matt. >> Um, so he's working on starting up a YouTube channel called Home Lab Hazards. Um, Matt reminds me of all the absolutely janky, insane ideas that I would have 15 or 20 years ago with hardware, and would try on my own. Um, like, hey, did you know you could double the power delivery if you just shunt this over to pin seven on this, or... >> shunt mods. >> Yeah. Exactly. And so if there is a cursed way to deploy hardware, Matt is three steps ahead of you, with "I've already tried that. I let the magic smoke out of four different things. Here's why it doesn't work. But I did get it to work on number five."
It's fantastic. And I can't wait for him to actually get his channel up and going, because it's going to be some fun stuff. Uh, Jay sends over another $5. Thank you very much, Jay. "Not a professional. Uh, I think I missed some TrueNAS lore. Can we get a rundown of which tech is which today? Uh, there's no longer a SCALE at all?" So, yeah. Uh, what is the actual timeline? Because obviously a lot of my longtime viewers, they remember FreeNAS. Uh, and in fact, FreeNAS was one of my first experiences with a non-Windows OS, back in >> 2005, maybe. >> That's about right. >> Yeah. Um, I want to say somewhere '04, '05. Uh, I had an old HP Celeron desktop that I wanted to have some shared files between my actual desktop and a laptop. And I'm like, hey, let's try this NAS product. That sounds fun. And I installed FreeNAS for the first time. >> Um, >> that sounds about right. >> Yeah. >> So, originally, um, >> uh, I would probably brutalize his name, but a gentleman, Olivier, was the one who created FreeNAS the first time, in the, you know, the zero-dot versions. I think my first one was, like, 0.64; I installed that one back in the day. Uh, so iXsystems, as, you know, that company, we took it over in 2009, to take over development of the FreeNAS project. Um, and at that point, that was sort of when the new one was coming out, to say, you know, hey, this is our new one that's got ZFS on there as the primary one, or the only one. Yeah. You know, we'll let you import your old UFS pool, but we'd really rather you go to ZFS. Um, so in terms of, like, when the shift happened, it was 2020, 2021, >> where we said, hey, we're going to, you know, unify. We started in 2020 to actually merge the code bases together, because beforehand, FreeNAS and TrueNAS were separate code bases. So we said, hey, it just, you know, doesn't make any sense. Yeah.
So we unified them in 12, and there was just TrueNAS 12, and it was, you know, CORE and Enterprise. So CORE was your free version, uh, Enterprise was the paid one that went on our own hardware. >> Yeah. >> Then, uh, SCALE was the one that was created, which was the Linux-based version. >> Yeah. >> Yeah. >> So CORE and SCALE ran in parallel for a while, >> and then, um, basically what happened was >> CORE was eventually sort of, uh, you know, sunset. It's been put into, you know, long-term maintenance. >> Mhm. >> And eventually it's finally reaching the point where we're saying, "Hey, pretty much everybody should be running what formerly was called SCALE and is now called either Community Edition, >> still SCALE, still Linux-based, or Enterprise." Yeah. So those were the two. >> Yep. >> Because, yeah, prior to 2022, um, all of the releases were BSD-based. Um, and, uh, like you said, there were two different code bases. There was one for FreeNAS, which was the free, open-source community edition: install on your hardware, do what you want. Um, and then there was, uh... it used to be FreeNAS enterprise, but then turned into TrueNAS Enterprise at some point down the line. I don't remember what year, but that was the iXsystems enterprise version that had high availability as a function, and a whole bunch of other things, that was kind of designed around their hardware. Similar code base, but not the same code base. Um, and then, yeah, you started unifying the code at some point. You switched over entirely to the TrueNAS name along the way. Uh, in '22, TrueNAS CORE was still the BSD base, and TrueNAS SCALE was the Linux base, and then, yeah, now we have TrueNAS Community Edition and TrueNAS Enterprise, which are all Linux. >> Um, just going back, I saw another username here that twigged something earlier.
Uh, DH Chang, I think you've actually asked questions on the T3 podcast before, but I saw you asked a question saying, "What was the biggest system I'd ever deployed?" Um, I think it was 11 petabytes. >> Um, and it's for customer redacted, so I can't tell you, unfortunately. Okay. >> An 11-petabyte system. >> Yeah. So Jay follows up with, uh, "So SCALE is no longer intended to be a scale-out NAS product, because of the Gluster thing?" Yes, unfortunately. >> Um, you can still do some scale-out if you install MinIO on each node. Uh, you know, MinIO is MinIO, which is a separate question. Uh, but you can scale out object with a clustered object store there. But unfortunately, there is no cluster underneath it, um, where you can put ZFS on top. And I saw Vince's comments about, you know, ZFS on Gluster as a horrible mess. [laughter] There were some really good ideas, and had Gluster continued, I think it would have turned into a very fruitful product. But, um, that's also, unfortunately, the nature of open source. It's, you know, that XKCD comic where, if you pull out one peg, the whole tower collapses. Gluster, in this case, was that peg. It would have been really cool, but it's no longer maintained, and you don't have the people to jump in and take it over, and things like that. [laughter] Yeah. >> Yeah. People are asking, uh, Plex or Jellyfin, in chat. He says, Plex or Jellyfin? Jellyfin. >> I'm Jellyfin. >> I am still on Plex. And >> there are some good reasons to stay with Plex. The Jellyfin client on certain TVs, yeah, like the smart TVs, is not there yet. Yeah. >> Um, none of my TVs are smart. My TVs are all very dumb. I have a number of NVIDIA Shield devices, >> which I love. Yep. >> Uh, I think I have two of them to my left. These little guys. >> Yep. >> Love those. I have one of the regular ones, and one of the larger Pro ones that you can put a SATA drive in. >> Oh, nice. >> Yeah. Love those. >> Yeah.
Uh, we're a fully Roku house here. Uh, we have a bunch of dumb TVs. Uh, I have one smart TV that is not connected to the Wi-Fi in any way, shape, or form, because it's a Samsung and it's a piece of garbage. Um, but, uh, so we have Rokus on all of our devices. Um, and basically they run Plex 95% of the time. Uh, so they just start up, we go one click over, it's the top left icon on Roku. Um, and for that reason, they just run great. They're inexpensive. Uh, that's all I'm doing with them, is essentially running Plex. Um, we've canceled all of our streaming services, essentially, here. Um, I don't have anything that... we still do have an Amazon Prime membership, and so we still have Prime Video. And then I think, because friend-of-a-friend kind of thing, I think I have an HBO login, and we'll occasionally watch things on there, but that's really it. Um, oh, and I subscribe to Dropout. Subscribe to Dropout, because Dropout's great. >> Um, but I run Plex because I've been running Plex. Uh, [snorts] and I have the lifetime membership, which, full disclosure, was given to me by Plex, because I did a couple of things for them, like, six or seven years ago. >> Um, >> I mean, it's different strokes. It's not like it's a one-size-fits-all solution. This isn't a holy war. You don't have to swear allegiance to one side or another. >> Somebody says, you know, I've got Plex as a primary, I've got Jellyfin as a backup for if, you know, the internet's down or something's going on. You know, people running Emby. They're saying, hey, why not? I've seen Emby as well. >> The third position. >> Yeah. In fact, I've actually been considering, because my entire Plex library is just on a TrueNAS server... there's no reason in the world I couldn't also fire up an Emby or a Jellyfin and just point it at the same library.
Um, and so I've been considering doing that as, like, a side project, just kind of, hey, how does this work? And that's kind of the point of the whole microcloud system that I've got right now: I've got all these other nodes with a whole bunch of extra compute, so that if I want to just try something, I don't have to mess with my production boxes. I can just slide in a new node, install Proxmox, throw Emby on it, done, um, and give it a whirl, and not touch my actual servers that I use every single day. >> Exactly. There's a difference between home lab and home prod. And that's the important line: who gets mad when it breaks? >> Yes. If it's not just you that gets mad when it breaks, well, that's home prod. Don't mess with it. >> Yes, that's exactly right. >> Your DNS, your DHCP, you know, AdGuard, stuff like that, that lives in my home prod area. Don't mess with those. Um, the problem I went through is, I've run so much production in my home lab, but I've also done so much consolidation that it's like, you know what, if I just threw in a bunch of cores and a whole bunch of memory and a whole bunch of NVMe and a whole bunch of drives, I bet I could consolidate my entire server rack down to, like, three boxes. And I did. And I got down to, like, 550 watts of idle power draw at some point. [snorts] And, uh, I had >> But that's just no fun, >> right? That's just no fun. You know, I have no reason for a microcloud. You'd better believe there's probably three of them on my eBay watch list right now, and I'm scheming how to get them into the country without paying, you know, $500 in shipping. >> Oh, why am I not... um, sorry, dinner order. Um, >> no worries. >> Yep. Uh, but yeah, no, I consolidated everything down. Um, Wendell over at Level One dropped me a 64-core Epyc Rome processor, and I'm like, "Sweet, let's run everything off of one box." And so I had virtualized TrueNAS on that box. I had all of my VMs running on that box.
And the problem is, if I need to do anything to that box, everything dies. >> Yep. >> So now I have eight nodes in a chassis, four of which are running production, but the other four nodes could be anything. >> Do whatever you want with them. Yeah. >> Yeah. Um, >> cool. So that's been a lot of fun, getting that whole system up and going. And it's one less U; it's actually 3U instead of 4U on that box, which is great. So >> it's one of those weird things where it's, like, not quite a regular 4U, it's not a 2U, it's not a skinny little one. It's that, you know, >> Yeah. >> middle ground. >> Yeah. >> But, you know, I think there's some 3U storage boxes from Supermicro that let you pack... I think it's 16 drives, which is >> pretty nice density. >> Yep. Which is actually the same density as the microcloud. The microcloud is a 16-bay array on the front, and then >> two wired to each bay. Right. >> Exactly. Yeah. And then eight nodes in the back, with two bays direct-wired to each node. Um, and so, not great if you wanted to run, you know, a storage server out of it, but really great if you wanted to run a high-availability compute cluster. Um, you know, it was really designed with, like, web servers and things like that in mind. So >> um, but in my environment >> game servers, something like that. >> Yeah. In my environment it's perfect, because I've got somewhere between eight and 12 cores of compute on each node. I've got a graphics card in each node, so I've got full transcode and encode capability. Um, and then I'm booting off of a pair of 1 TB SSDs on the front. Um, I've also got, on one of the nodes, a pair of 8-terabyte SATA drives running Proxmox Backup Server. So it's all backed up internally to the same chassis, although they are separate nodes. So the only thing that's going to kill everything is a power supply failure. Um, and it just runs great. >> Yeah.
Well, I mean, that's a good way to do it, having the multiple arrays of, you know, independent computers, never mind drives, >> right? Good. Well, I think let's get into looking at a couple of things here. You know, people are talking about... there we go. Uh, "YouTube sucks so bad". Well, unfortunate username for me to read on YouTube live, but, uh, [laughter] he says, "I purchased a TrueNAS M40 for work. It's been working well for the last two years, powering vSphere, and now OpenStack alongside Ceph." Well, I'm glad to hear it's working out well for you. Thank you very much for, you know, supporting us. Let's just keep... you know, everybody who buys Enterprise, when you do that, you're letting us keep giving away the free edition to everybody else who doesn't. So, right, >> thank you very much for that. We greatly appreciate it. >> But yeah, let's get into busting some of this, like, ZFS... >> I was going to say, let's do some busting, because that sounds like fun right now. >> There's been a lot of this, what we call tribal knowledge, that's run around. Um, you know, maybe it used to be valid back in the day, or maybe it was one of those, like, you know, Wikipedia citogenesis things, where somebody wrote it on a blog once, it got picked up as a piece of authority, it got fed back around into Wikipedia, and then it just got reinforced over and over. So, I mean, we can run down the list of them, and, I mean, start throwing them in the chat there if you've heard anything that you're like, "Hey, is this true about ZFS?" You know, we've done the ECC one, so we busted that one pretty quick. I think let's start... >> How much memory does ZFS need? >> How much memory? Uh, yes, >> let's start there.
Um, because this one is actually rooted in some actual recommendations from iXsystems, in that the recommendation in the official documentation used to be 8 gigabytes minimum, and then one gig per terabyte of whatever raw storage you have. Um, and I've actually repeated that on video myself, you know, five, six, seven years ago, as, yeah, this is what is recommended and this is what you should probably follow. >> Yeah. >> In reality, the reality has overtaken, I think, what those requirements were meant to be about. And so, what's the reality today? So, I mean, really, what that is, is we say, you know, 8 gigabytes is kind of our floor. That's going to let you load every potential piece of our middleware, the skeleton of every service for sharing over all of the supported protocols, all at the same time, and it's going to leave you a tiny little bit for your ARC, your adaptive replacement cache. >> Mhm. >> Um, the whole 1 GB per terabyte thing is a general rule of thumb. You want to be able to keep not only your most important metadata, so your descriptors of, like, hey, where does my stuff actually live on my vdevs, >> but you also need that because your RAM is your primary source of performance for ZFS. It's your adaptive replacement cache. It's feeding that. So you say, "Hey, you know, we need to put stuff in here." And then it's got to be big enough that you can actually let your data sit there and be hit more than once. Because if you have so little memory that you're just cycling out everything you throw in there as soon as it lands, yeah, >> you don't get the benefits of the cache. So, you know, you can ask, is it one gig per terabyte? Is it 5 gigs per terabyte? It's very, very workload dependent.
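As a back-of-the-envelope check, the old rule of thumb they describe (an 8 GB floor for the middleware, plus roughly 1 GB of RAM per TB of raw storage for ARC and metadata) can be written out; the per-TB factor is the workload-dependent knob they stress. A rough sketch, not an official sizing formula:

```python
# Old-school TrueNAS/ZFS RAM rule of thumb from the discussion:
# floor for services + a per-terabyte allowance for ARC/metadata.
# The gb_per_tb factor is workload dependent -- ~1 for sequential
# media serving, much higher for cache-hungry VM/database work.

def suggested_ram_gb(raw_storage_tb, gb_per_tb=1.0, floor_gb=8):
    """Rule-of-thumb RAM sizing: floor plus per-TB ARC allowance."""
    return floor_gb + raw_storage_tb * gb_per_tb

print(suggested_ram_gb(80))                  # 80 TB media pool -> 88.0 GB by the old rule
print(suggested_ram_gb(0.8, gb_per_tb=70))   # 800 GB of hot VM data -> 64.0 GB, per their example
```

The point of the segment stands: the formula is a starting guess, and the real answer comes from whether your working set actually fits in ARC.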
So if you're just doing, like you said, you know, 7 terabytes of Plex media, or you've got, you know, 80 terabytes of Plex media or something, you can probably do that on 16 gigs, honestly, because your workload is so very much sequential. You can serve it all off the disks. You're not dependent on that memory to derive the performance you need. Now, if you're doing even 800 gigabytes, but it's all heavy-hitting VM stuff, >> Mhm. >> you probably want 64 gigs in there. >> Yeah, >> you beat me to the point. I was going to bring up the sequential, the home lab use, versus, like, production, productivity, database, VM, um, as kind of the breaking point. Because, kind of like I said earlier, what are your expectations for performance? What are your expectations for reliability, uptime, data integrity, whatever else? It's very much a sliding scale, especially at the home lab level. And if you're running TrueNAS in production with ZFS, and you've got these massive SQL databases, and they're running 24/7 doing finance traffic, um, you want 100% reliability, as little latency as possible, everything possible stored in L2ARC. You want everything at the highest availability level that you can possibly get, because that's how you make money, and any differences in that data are going to cost you money or time. Um, and that's not good for your business. If I'm running a Plex library with 7 terabytes in it, and I'm streaming, at most, a 40 GB 4K Blu-ray rip, raw, at, what is that, 15 megabit? Yeah, I was saying >> 4K. Yeah, 4K is going to be like >> 4K, like 120. I was going to say megabit. >> 12, maybe. Depends. Depends if you've got, like, that 7.1.4, the DTS, >> right? Yeah. Between 100 and 120. If you're on Lord of the Rings, maybe closer to 80, because holy crap, that's a long movie. >> Um, yeah. Um, it's a difference in expectations, and it's a difference in use case.
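The bitrate numbers being tossed around are easy to sanity-check: average bitrate is just file size over runtime. (The ~100-120 Mbps figures are for full UHD Blu-ray discs, which run well past 60 GB; a trimmed 40 GB rip averages much less. File sizes and runtimes below are illustrative.)

```python
# Average streaming bitrate of a media file: size over runtime.
# Uses decimal gigabytes (1 GB = 8000 megabits), which is how
# drive and file sizes are usually quoted.

def avg_mbps(size_gb, runtime_minutes):
    """Average bitrate in megabits per second."""
    megabits = size_gb * 8 * 1000
    return megabits / (runtime_minutes * 60)

print(round(avg_mbps(40, 120), 1))   # 40 GB over a 2-hour film -> 44.4 Mbps
print(round(avg_mbps(100, 252), 1))  # ~100 GB disc over a 4.2-hour extended cut -> 52.9 Mbps
```

Either way, the conclusion of the segment holds: even worst-case 4K bitrates are trivially sequential work that spinning disks serve without any exotic caching.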
And you don't need to store your entire Blu-ray of Lord of the Rings in L2ARC to get good performance out to your Plex box. >> Exactly. That's pointless. Um, >> so I shouldn't have gone out and bought all of these Optane DIMMs? >> Oh, the Optane DIMMs are great. Um, but use them, you know, set them up in persistent memory mode. Use them as your special vdevs, your L2ARCs, your SLOG devices. >> Um, I'm actually seeing something from Vince [snorts] here in the chat, and it's something that Chris and I actually called out on a previous podcast, where he says, if you're running a database that does its own caching, you actually minimize the ARC. So there's actually a tunable you can set on a per-dataset basis. It's called primarycache. So you do zfs set primarycache equals, and it's either all, metadata, or none. You basically never want to say none. So it's all or metadata. So if you have a client system, like a DB, that is leveraging that dataset and does its own caching, >> set it to only put your metadata in ARC, >> because it's going to cache itself on the client side; it'll be able to do that more intelligently. The exception to that, though: ZFS compression. ARC is compressed. So if you have really compressible data, and you're getting, like, a 2:1 ratio by letting ARC compress it... okay, well, maybe you downplay your caching on the client side. Let ZFS handle it. Basically, you're going to have to test this stuff. Say, hey, is it better to have half as much fast cache on the client, or twice as much slightly slower cache over the network through ARC? >> Yeah. >> You know, test the stuff. >> It's almost like different use cases have different performance expectations, and different tunables to make that use case a reality. Who would have thought? [laughter] >> No [snorts] kidding. Different things are tuned differently. So, somebody's calling you out for flexing your Optane DIMMs and stuff.
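The tradeoff they describe (with `primarycache=all`, compressed ARC holds more logical data than its physical size; with `primarycache=metadata`, the DB caches uncompressed data locally without a network hop) can be framed as rough arithmetic. This is an illustrative sketch of the comparison only; as the hosts say, the real answer comes from benchmarking, and the numbers below are made up:

```python
# Rough framing of "client cache vs compressed ARC": ARC's effective
# capacity is physical ARC size times the compression ratio, since
# OpenZFS keeps ARC data compressed. Hypothetical sizes and ratio --
# only a real benchmark decides which side wins for your workload.

def effective_cache_gb(cache_gb, compress_ratio=1.0):
    """Logical gigabytes of data a cache can hold."""
    return cache_gb * compress_ratio

arc_side = effective_cache_gb(32, compress_ratio=2.0)  # 32 GB ARC, 2:1 compressible data
db_side = effective_cache_gb(48)                        # 48 GB uncompressed cache on the DB host
print(arc_side, db_side)  # 64.0 vs 48 -- compressible data can tip it toward ARC
```

Capacity is only half the story, of course: ARC hits still cross the network, so latency-sensitive workloads may prefer the smaller local cache anyway. Test it.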
Let me [laughter] just pull open my drawer of SSDs here. I might have a few spares kicking around. Um, there you go. The most expensive thing in my drawer would probably be this. >> Oh, a 96 gig DDR5 kit. >> 96 gigs of DDR5, CL30. Do you need some Optane DIMMs, Chris? [laughter] >> Just a few. Just a few. Now, how many terabytes of Optane do you have there? >> Uh, that's about 2.8 or so. These are 128 gig sticks. Lovely. So I think there's 22 of them here. So, >> yeah. >> Yeah, people are now giving you the Optane love in the chat. Um, yeah, the king is dead. Long live the king. >> Exactly. >> Um, >> somebody was asking in your Discord. They said, "Hey, what's going to be the substitute for Optane?" Like, you know, at the enterprise level, there's nothing that's going to be quite as good on that value for money for your home lab, your home prod user. Um, we can't go out and buy, like, a Kioxia FL6 or a DapuStor Xlenstor2. >> Those are what's called storage class memory, is what they're branding it as. So, like, Kioxia XL-FLASH, Samsung Z-NAND. Um, but, like, fast NVMe is pretty close to the way there. Like, I've seen some that are probably, I'd say they're about 80% of what an Optane is going to do. Yeah. >> And that's at your small block sizes, >> like your 4K stuff, versus up at your one-megs, where you're trying to do, like, bigger loads in there. Like, you know, I hate to say it, but everybody's talking about, you know, oh, AI models and how fast can you load them. Yeah. But if you're doing big sequential I/Os, big reads, big writes like that, you're pretty close to the sequential throughput of Optane on a modern NVMe card. >> Yeah. Um, I had an interesting thought live on the show maybe two or three months ago. I think I was talking with Tom Lawrence; we were talking about the NVIDIA $5 billion investment into Intel.
Um, and, you know, we're talking about all these RAM prices potentially going up and storage prices going up and things like that. And I said, man, Intel spun down Optane two years ago because Optane was kind of ahead of its time for what it was. It's great technology. I think there's so much upside to it, but they didn't have the customers that they needed at the time, and it made sense to spin it down. They tried to sell it off. They tried to sell it off to so many different people, and no one bought it. And so Intel still owns Optane; they just don't produce it. Um, with all of this going on right now, with production shifting over to HBM, and NVMe over Fabrics becoming a real player in the AI space to dramatically increase your available memory to GPUs: is NVIDIA sitting on a potential gold mine with a partnership with Intel, to bring back Optane from the dead via NVMe over Fabrics, or a memory interface, or some hybrid in between, and completely bypass the whole HBM/DDR5 requirement? >> It's possible. I mean, it's one of those things where you go, hey, they've got their proprietary NVLink that they're running at, like, terabit-level speeds, >> right? >> So, could they just, like, make a... Yeah, or Optane was a joint venture with Micron, right? But could they just make some card where they say, "Hey, this is going to be, you know, NVLink compatible, and it's just a crap ton of Optane?" >> Yeah, you just drop four terabytes onto a PCI Express card or some crap, >> right? >> Yeah. NVDIMMs; I'm talking about battery-backed RAM in this case, Vince. Not like a PMem Optane device. I'm not talking about, like, the DCPMMs. I'm talking about actually, like, 16 to 32 gigabytes of DDR4 or DDR5 with a supercap on it and a bit of flash to back it up.
Those guys are the ones that we stick in systems when we want to say, like, hey, we want to measure in gigabytes per second of throughput at 4K, >> right? >> Yeah. >> And what's funny is, that's always what Optane was great at: those random seeks, which is what AI truly needs to function well. And so, is NVIDIA sitting on a potential gold mine? Is that something that we might see spring up in the next 12 months? It's going to take some engineering time. This isn't going to be something where they just go, "Oh yeah, we'll just crank out some more Optane DIMMs and we'll plug it into a Blackwell card." It's going to take, like, 12 months for them to figure out exactly how they want to interface it, whether it's PCI Express, NVLink, even InfiniBand, whatever the hell they want to do. Um, but there's potential there that Optane may come back from the dead. I have no inside sources on that. This is pure speculation, but really fun speculation. [laughter] >> Yeah, but I would love it to come back. I mean, I'd even just love to get my hands on a few more of those little 58 gig or 118 gig P1600X cards. Those are great cards. Those >> those died too soon. >> Yes, they did. >> Yeah. >> Uh, I still run a bunch of the 16 gig Optane M.2s just for appliance drives, because >> boot devices. >> Exactly. As you said, they'll outlive the heat death of the universe, because >> basically infinite write endurance on them, >> pretty much. And, you know, they're PLP-safe inherently, because of the design of 3D XPoint. >> Yep. >> Because writes just go straight to there. They don't get buffered. So that's fantastic. >> Yep. I'm sure, like, I see people talking about how they use them. You know, some people say, hey, they use it for a boot pool. You know, "All my log devices are Optane." Optane was an absolutely fantastic log device. Probably the best one you could get as a consumer. Um, >> yeah, a ZFS log device.
Again, it's a log device. It's not a cache. No matter how many times people say it, it's not a write cache, right? >> Um, you can cheat and make it act like one by cranking up the values of, like, the zfs_dirty_data family of tunables. Yeah. Um, because by default that tops out at only 4 GB out of the box on OpenZFS, or on TrueNAS, unless you tune it up higher. >> Yeah. >> Now, you can crank that up higher if you have the memory to hold your transaction group in there. So if you say, "Hey, I've got 128 gigs, or 256." I think somebody posted in, I don't know. Who's got the most amount of RAM in their system? Sound off in the chat. But if you've got 256 gigs of RAM in there... Jeff's turning around. I win. >> How much are you repping? A terabyte? >> Uh, one and a half, in DDR5-6400. >> Okay. In a ZFS system? >> Uh, no, not in the ZFS system. I've got 256 in my ZFS system right now. >> Okay. So there you go. >> So if you have 256 in there and you say, "Hey, I want to bump up my dirty data," >> maybe you're going to increase it from four to eight, or even 16. >> And what that basically means is you're going to have up to 16 gigabytes of pending data that's going to get written. Now, basically, you know, the log device is holding, like, a carbon copy of those writes, is what's happening. So the writes are going into RAM; they're also being copied to your SSD. So Optane works really well for that. >> In normal operations, all that happens is the transactions are spat out from RAM to their final resting place on your VDEVs, and then you're just discarding those groups. You're discarding the data off the log device. Yeah. >> It is never read back from that log drive unless your system crashes and you have to replay from it. >> Yep. So, all you need from that log drive is: it's got to be big enough to hold that 4, 8, 16, however big you set it, buffer.
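A minimal sketch of the sizing relationship above, assuming the OpenZFS default of 4 GiB for `zfs_dirty_data_max` and an illustrative 2x headroom factor for the SLOG (the factor is a hedge, not an official formula):

```shell
# SLOG sizing vs. zfs_dirty_data_max, per the discussion above.
dirty_data_max_gib=16   # e.g. raised from the 4 GiB default on a 256 GiB box
txg_timeout_s=5         # default transaction group window, seconds

# A SLOG only ever holds in-flight transaction groups, so a small
# multiple of dirty_data_max is plenty; assume 2x for headroom.
slog_min_gib=$(( dirty_data_max_gib * 2 ))
echo "SLOG should be at least ${slog_min_gib} GiB"

# On Linux/TrueNAS SCALE the tunable lives here (value shown is the
# hypothetical 16 GiB example, in bytes):
#   echo $(( 16 * 1024 * 1024 * 1024 )) > /sys/module/zfs/parameters/zfs_dirty_data_max
```

This also shows why a 58 GB or 118 GB Optane was plenty as a SLOG: the device never needs to hold more than a few transaction groups' worth of dirty data.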
Um, and then, you know, just do it very, very quickly, [laughter] you know, put it in and then get rid of it. It's just got to return and say, "Hey, okay, I've got that data. It's safe." Sends it back up the chain. ZFS says, "Okay, great. It's good." That's it. And then it writes to there endlessly and then just tosses it out. And so that's why, if you look at SMART stats or anything on a log drive, it's going to be like, hey, terabytes, petabytes written, and your reads are going to be like, oh, 5 gigs. >> Yeah, >> it read five gigs from me once, when my system crashed and I had to replay the log files. >> Yeah. Uh, Yakto, I think, wins by default with two terabytes in his. Um, although I will say, I think I sold him half of that. [laughter] >> Yeah. Wow. That's impressive. >> Yakto's our resident humble home lab, >> humble home lab, >> with simply 400 amp service and five racks in his garage. >> Good lord. [laughter] >> I won't ask you to even justify that, because having that is justification, and stuff. >> No, you don't need to do anything else. >> Yeah. >> All right. We got a couple questions here popping up. We got Green Protagonist with $5 here. Thank you very much for that. He says, "I have four 800 gig SATA SSDs for 4 VM game storage." Uh, he says, "Should I do one disk per VM? Should I do a RAID Z1 SMB volume, or some kind of multi-VM virtual disk with dedup?" Um, now, Jeff, you had a great video about using, you know, iSCSI zvols with deduplication. Um, in this case, I'm going to say >> now, you're using Proxmox there, Green Protagonist. So I'm going to buck the trend a little bit here and say: you're going to use NFSv4, you're going to use your qcow2s, and you are actually going to copy those, which is going to let you use block cloning, which is in TrueNAS. So that's like dedup lite.
You're not going to pay the memory overhead of deduplication, because you're basically going to just copy the underlying qcow2 file from system one, or, you know, your gold system. Make a system zero and copy it to systems one, two, three, and four. You'll basically have zero overhead for that, because it'll be like a snapshot. All of your systems will be able to have exclusive access to the disks back and forth for their reads, their writes, their little updates, and you're only going to be writing whatever little deltas happen in, you know, config files, updates, caching. When a big update comes in, hey, you update your gold image, system zero, you get a big patch that comes in for a game, and then re-copy out those underlying drives. So, you know, I saw you mentioned earlier you're stuck on Windows for it because you're using the P4s. I feel your pain. That's where I'm at, [laughter] right? Um, so I have a C drive in these machines and I have an S drive for Steam. So when I need to do my updates, I update the S drive on the main machine, and then I just shut down the other two, copy over those two disk files, overwriting them in place, which zeroes out any deltas that happened. The new patches are applied, I fire up the two VMs, and we're off and running again. It's great. >> That's pretty great. Yeah. >> Yeah. >> Um, probably the way that I would do it is, I would probably run all four drives in something like a RAID Z. You get a little bit of redundancy. >> Oh, yeah. >> And I like enabling, you know, again, if we're aiming for a little bit simpler but a little bit more overhead, I like the idea of running those in a RAID Z with dedup enabled. Because, yes, every VM is going to have to download its updates, install Steam games, et cetera. But you're only going to incur the writes from one of those, because the data already exists on the drive.
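The gold-image copy workflow described above might look like this. The paths are hypothetical, and the `run` wrapper prints rather than executes; on an OpenZFS 2.2+ dataset with block cloning enabled, a reflink copy shares blocks instead of duplicating them, so each clone starts with near-zero extra space:

```shell
# Sketch of the gold-image copy workflow described above.
run() { echo "+ $*"; }   # dry-run wrapper; swap for "$@" to execute

GOLD=/mnt/tank/vmstore/gold-steam.qcow2   # hypothetical gold image

for n in 1 2 3 4; do
  # Shut each VM down first, then overwrite its disk file in place.
  # The reflink copy clones blocks, discarding each VM's old deltas.
  run cp --reflink=always "$GOLD" "/mnt/tank/vmstore/vm${n}-steam.qcow2"
done
```

Whether `--reflink=always` works depends on the OpenZFS version and whether block cloning is enabled on the pool; `--reflink=auto` falls back to a normal copy if cloning is unavailable.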
Um, when the systems are simply running, though, you get the benefit of the read speed of three drives. And so it's going to dramatically speed up whatever operations, versus just pushing one SATA SSD through, which caps you at 550. Um, so >> yeah, I forgot to mention that. Yeah, absolutely, put them in a RAID Z, because the worst thing to happen would be that, you know, one of them dies and, hey, this VM's down, and you've got to figure out if you can move it to another or something. >> Put them in a RAID Z. >> Yep. >> Yeah, Z1's fine. >> Yep. That's what I would do. >> Yeah. Uh, we got a question here from Brian Vaz. You want to read that one there? >> Sure. Uh, Brian, thank you very much for the $10. Much appreciated. Even if it's Canadian, we still accept that here. Although I'm going to have to pay a tariff on that, I think. Uh: Western Digital announced they're relaunching dual-actuator drives today. Any chance to add support for DA SAS or SATA drives? Um, we talked about this right before we went live. We were talking about dual-actuator systems. >> We were actually talking about that. We said, "Hey, WD launched their dual-actuator drives." Um, and I'm not surprised by that, because, if you look at libzbc, for zoned block commands, they actually own that GitHub repo. So I'm not surprised they finally put some dual-actuator drives out there into the wild, because they're saying, you know, hey, we want to be able to do this weird stuff with the writes back and forth. >> Um, so the challenge with adding support for them: the dual-actuator SAS drives from Seagate, and right now I'm working under the assumption that WD is going to do the same, work by basically having two LUNs on a single SAS port. Now, SAS, of course, has two paths, but in the TrueNAS enterprise world we use those to hook to two separate controllers, as in, you know, an active/standby kind of world.
>> They're used internally in these dual-actuator drives, because they're basically using it to link from one port to another internally in the drive. So it only supports a single external SAS port. >> Yep. >> So for us, from an enterprise perspective, it's like, well, those drives aren't any good to us. >> Yeah. >> Except in, like, a single head unit, like our R series. >> The SATA drives, [laughter] they're a little more complicated. Um, and if WD does their SATA drives like Seagate does theirs, which is just an LBA split right at the 50% mark, where they say all the LBAs below here are on actuator one, all the LBAs above it are on actuator two, we could theoretically do that. The challenge is going to come in with making sure we have the uniqueness. We need to build something into the middleware so that you're properly being guided to not have, you know, >> two parts of the same pool on the same actuator, 'cause you might have two actuators, but you still have one spindle. >> Yeah. >> Um, so if that one drive dies, all of a sudden you've lost the equivalent of two disks. And so if you make a RAID Z1, we want to make sure you don't accidentally give yourself a, you know, a deadly failure domain with one drive. >> Yeah. >> There's also looking at the oddities of the firmware, like the Seagate ones, for example. There are certain SCSI commands that, when issued to one unit, will cross the LUN boundaries. Um, you know, flush cache is one of them, which is a performance impact. You know, you might have a little bit slower performance than you expect, but you can manage with that. The other one that crosses boundaries is, like, the SCSI FORMAT UNIT command. So you think you're formatting one pool, and you accidentally nuked part of your >> accidentally nuked two drives instead of one, right? >> Oops. Yeah, that's a big mistake.
And that ends up with your data going in the forever box, and we don't want that. >> Yeah. >> Um, DA drives probably belong a little bit better in a Ceph environment, not necessarily ZFS. I'm going to be a little bit bold and state that, simply because of the standard recommendations for, like, your VDEV size and things like that. Um, if you build a Z2, and let's say you have an absolute failure of a drive: the controller on one of those disks took out two of your disks, and that's all of your redundancy. Ceph is a little bit more resilient in that regard, where it's a little bit more spread out. With ZFS, if you lose those two drives, you're down to nothing at that point. >> Um >> you're hanging in the wind there. >> Exactly. And so, while you could probably use this, and while I'm sure you guys are going to implement something for DA drives that, you know, makes as much sense as you can possibly make of it, um, it feels like playing with fire to me, if I'm being honest. >> Juggling chainsaws again. >> Yeah. [laughter] >> Yes. But you can do it. And I see people commenting that people are using Wendell's scripts for the SATA-based dual actuators. Mhm. >> Um, it's there, and it's one of those, like: yes, it works, but we can't formally support you on that. It's kind of one of those things where, if you're good enough to be doing a script to hand-partition your dual-actuator SATA drives, I mean, hey, you're probably awesome enough that you can do that, and we do expect you to be able to understand there might be some consequences to that, in terms of, the drive firmware is probably going to get a little bit confused. >> Yeah. >> Yep. We got a couple of them sneaking in here. We got a couple more highlighted ones. >> Yep. Uh, Cosworth sends over five bucks. Thank you very much, Cosworth. Uh: Yakto, secondhand gear is what Techno Tim uses in videos.
[laughter] Um: I'm slowly transitioning my storage from Synology SHR2 with cache to ZFS on Unraid. >> Awesome to hear. Uh, we do not drive-lock our, you know, we don't firmware-lock our drives, by the way. >> Right. >> Whether they're NVMe, SATA, SAS, we don't do that. That's not what we do. Uh, Jason Coyle sends in the donation of the beast here. Thank you [laughter] very much for that. Um, it says, "When is Spice going to die and be replaced by VNC?" Um, VNC, I believe, is actually the default now in ours. You can re-enable Spice. Yeah, >> in 25.10, but VNC is the default. Um, there's a number of open source, free VNC clients. Uh, man, it just works a hell of a lot better >> Yeah. >> than Spice. But if you really want it, you can have it back. >> Yeah, >> you can set it up. You can create a Spice virtual display driver and have that work. >> Yeah. >> Um >> couple of them sneaking in. >> Yeah, it is the default on 25 now. I know Proxmox is still on Spice. I wish they would be fully on board, but at the same time, the nice thing about Spice is it's web-native, and if I double-click on a VM, it just opens the window and everything mostly works. And so it has pros and cons. So >> yeah, a couple more sneaking in there. Uh, someone says, "When is ZFS rewrite coming to TrueNAS SCALE?" Uh, it's there in 25.10. It is an undocumented, sort of command-line one: you can zfs rewrite. What we don't have support for is the zfs rewrite with physical birth time. So that is the -P, capital P. Uh, that's going to come in TrueNAS later. Uh, DH Chang says, "Is there true write caching coming for TrueNAS/ZFS, or do I need to make an SSD array?" Um, that is an upstream question for ZFS, about having an actual write cache in the write path and being able to have stuff land on an SSD. That is a very interesting question, and I will leave it at that. So, it's a good one. >> My phone has received, like, four spam calls in a row. Uh >> they're persistent.
>> They are persistent. Yes, >> they're persistent. >> I keep looking down and going, why is my phone ringing again? No, that's a different number. Same area code. Uh, but >> So, since I know my kid's asleep and he won't be able to hear me: I'm planning on getting my youngest his first cell phone, which of course will have all kinds of parental controls on it. Yep. But, you know, I got it, I signed up, I got it reserved to a number, signed up for his plan. >> Four hours until the first spam call hit that number. >> Yep. >> Never been used by anybody. They're just robo-dialing all the way down, like >> Yep. >> Well, welcome to being online. It's like putting a VM online with no firewall. You're going to get people knocking. >> Yep. Exactly. Uh, we were doing something with my daughter's account this last week, and my wife wasn't talking to my daughter at the time or anything like that. She was trying to activate some service that we were trying to add to her, or something like that. And she goes, "Hey, so you should have gotten a verification code. Did you get that text?" She goes, "Oh, I got a text, but it wasn't anything I asked for, so I just closed it." And I went, "Good girl." [snorts] Excellent. >> Right answer. >> Yes. >> Right answer. >> So she goes, "Yeah, >> I wasn't expecting this." >> Yeah. "I wasn't expecting this. It said, 'Hey, here's your activation code.'" She said, "I didn't ask for that," and she swiped it away, and I went, "Yes." >> Yep. Right [laughter] answer. You raised her well. Good job. >> Exactly. [snorts] >> Vince sends over 20 bucks. Thank you so much. Uh: error 419, you're late on this one. What is this, going at the 90-minute mark? I appreciate it, though. Uh: error 419, page expired. Data nuked due to firmware being awesome. >> It's not a bug. It's an exciting new... and that's number five now. >> Nonstop, nonstop with this.
>> Yep. >> Oh, wow. Um, who else? We had a couple other topics we wanted to dig into, just about, like, more myth busting. We're getting back to that. Yeah. We talked about one gig per terabyte. What's one of the other ones everybody's always rhyming off, right? One gig per terabyte. How about 5 gigs per terabyte? What's that one for? What does everybody always say that one's for? >> It's for dedup, right? Yep. >> Dedup. Now, that one has a kernel of truth to it. >> Yes. 5 gigabytes per terabyte is assuming you're using dedup and all of your data is at a 64K average record size. So your usual, like, you know, office-document kind of files stuffed in there. >> Yep. >> And that works when you're doing a 64K average record size. The problem comes when people flip on dedup for stuff like iSCSI zvols, which works, you know, it works okay right up until it doesn't, because you went from a 64K record to a 16K record, which is a quarter the size. You have four times as many records, you have four times the RAM footprint: you need 20 gigs per terabyte before you start spilling your dedup tables out of memory. >> Now, can you back that up with special VDEVs, to say, "Hey, I want to put my dedup tables on SSDs?" Well, you can do that now. But back before those existed, if you exhausted your RAM and you were using dedup, you typically had a very bad situation going on. You would get that warning when you started to walk to the edge of it, where you went, "Hey... oh, my system's real slow." >> Yeah. >> "Oh, and it came back after 30 seconds. I guess it's okay. It didn't happen." That was your warning. And if you didn't pay attention to that, and you kept on going and loading more and more data, eventually you hit a spot where, you know, you needed 20 gigs and you had 16. That remaining four gigs of your dedup table is on spinning disks. >> Mhm. >> Dedup entries are sub-4K, >> Yeah, >> and are basically going to be randomly dispersed across your pool.
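The 5-versus-20 gigs-per-terabyte figures above fall out of simple arithmetic, assuming the classic rule-of-thumb figure of roughly 320 bytes of in-core dedup-table footprint per record:

```shell
# Dedup-table RAM math from the discussion above.
# Assumption: ~320 bytes of in-core DDT footprint per record, the
# figure behind the classic "5 GB per TB at 64K records" guideline.

bytes_per_entry=320
tb=$(( 1024 * 1024 * 1024 * 1024 ))   # 1 TiB of pool data, in bytes

ram_per_tb_gb() {  # $1 = average record size in bytes
  echo $(( tb / $1 * bytes_per_entry / 1024 / 1024 / 1024 ))
}

echo "64K records: $(ram_per_tb_gb 65536) GB/TB"   # 5 GB/TB
echo "16K records: $(ram_per_tb_gb 16384) GB/TB"   # 20 GB/TB
```

Quartering the record size quadruples the record count, and the table RAM scales with it; that is the whole trap with dedup on small-record zvols.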
>> There's a bit here. There's a bit here. There's a bit here. [laughter] There's a bit here. Boy, spinning disks absolutely love random small accesses, don't they, Jeff? It's their favorite thing to do. They love it so much. It is what they will spend all of their time doing. >> If you have a sub-4K record, you can easily access that at the blistering speed of about 1 megabyte per second. >> And you have four gigs of dedup table waiting on your disk. Mhm. >> So you'd better hope you get a hit in those first 16, >> because if you don't, and it has to start walking your on-disk tables, >> you are in for some pain. >> Yes. Because if you're doing the math properly, that's 4 gigs of on-disk table at 1 megabyte per second of sub-4K speed. You're looking at 4,096 seconds to access your table. [snorts] >> You're going to be waiting a while. >> Yes. And that's going to happen for every transaction group that comes in, >> Yeah, >> which by default close off after 5 seconds. So for every 5 seconds of writes, you're waiting 4,096, potentially. >> Yep. [laughter] >> That doesn't scale well. >> No. And that's for one user, for one file. >> Yeah. And considering the default deadman timer in ZFS is, I believe, 120 seconds, where it says, hey, your transaction group failed to complete, >> Yeah, >> if it doesn't complete in time, we're going to kernel panic you, >> Yep, >> because you're leaving your data hanging in the breeze, and we figure your system is broken or hung. >> Yep. Yeah. "This is a request that hasn't been fulfilled yet. We're now just going to ignore it." [laughter] Yeah, this is why, a lot of times, when you said "dedup," especially on the old FreeNAS forums, people had that sort of, we'll say, visceral reaction to you advocating for dedup. >> Yeah. >> Um, it was one of those things. Again, it's juggling chainsaws. >> You need to know that you can do it. You need to know that you're using the right workload for it.
>> And I mean, your video there, where you're advocating for it in the Steam libraries, you're pretty clear about, hey, there's risks to this, especially in the case where, if you're doing VMs, you can burn down your guest VMs as well and say, "Hey, I'm going to re-clone those." >> Yep. >> And, you know, you kind of reset that table. That works great. >> Yeah. [snorts] >> Yeah. There's a lot of really good uses for dedup. There's a lot of use cases where you don't need dedup. Like I advocated for in that video: if you're doing a Steam library where you've got six people who are accessing the same, you know, set of drives, and they're all going to be downloading a similar game, there's no reason you can't enable dedup, because they're probably not going to be accessing the same data at the same time. And updates happen pretty regularly. So if one client updates, they're all going to see that same update within a couple of days of each other; they're going to write that data; it's going to kind of equal out. Your performance should stay pretty good, even with minimal memory usage. Um, if you have an environment like an office environment (like, I used to manage environments that were, you know, 3 to 5,000 users pretty easily), they would send email attachments with a PDF in them, like a 20 megabyte PDF file, that everyone would just drag to their desktop or their downloads folder, wherever it went, and all of a sudden we had >> or they just sent it to everyone, right? >> Right. And so all of a sudden we had 5,000 copies of a 20 megabyte PDF taking up space. We enabled dedup on a lot of home folders, but we did it with a massive amount of memory, to avoid issues like that. And what's great about that is you're paying a little bit extra in memory, back when memory was cheap, and you're not sacrificing 20 megabytes a shot for every user who downloads that file. Because that adds up really quick as well, let me tell you. >> Yeah.
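The office-PDF example above is easy to put numbers on:

```shell
# Back-of-envelope savings from the office-PDF example above.
users=5000
pdf_mb=20

# Space consumed without dedup: one full copy per user.
raw_gb=$(( users * pdf_mb / 1024 ))

echo "Without dedup: ~${raw_gb} GB; with dedup: ~${pdf_mb} MB"
```

Roughly a hundred gigabytes of identical blocks collapse to one copy, which is exactly the workload dedup was built for, provided you pay the RAM bill for the table.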
Um, if you're in an environment where everyone is using mixed file types, no one's really accessing the same data, they're not accessing the same files, dedup doesn't make sense at all, and you're just asking for that performance hit. You really have to look at what your environment actually is and how your users are actually utilizing it, to know if it makes sense for you. >> I was going to say, I think my chat stalled for a while, so I refreshed here, and I'm just seeing a bunch of other questions. Uh, let's see. We got a Jason for $5: answer on air. For $10, regarding my spam call: I was thinking about doing that, at least listening for a couple of seconds and then hitting speaker if it's something fun. So, if they call back, I'll probably pick up. Um, Jay, $5, thank you so much. Uh: his name is Jeff. In this house, we practice skepticism of social engineering computing. [laughter] >> The [snorts] best way to hack a system is to hack the person using it. Absolutely. Social engineering. >> The one constant is, hacking has always been social engineering and always will be. >> Attack network layer 8, folks. It's the most vulnerable. >> Yep. >> Yep. >> American Cosworth, another $5. Thank you very much. He says, "I got a phishing call saying they were from the government agency I happen to work for. It was amazing fun how much I messed with them." That is fantastic. I love that idea. >> Um, I got a call from a dude one time who was really, really slick-sounding. Like, it was a real person, but doing a scam call. It wasn't a robocall or "this is Apple, you know, verifying your $1,700 purchase of a MacBook Pro." It wasn't one of those. It was, like, a real person. Um, and I don't remember what they were asking for, but it was some sign-up-for-this-timeshare BS, whatever. And, um, at the time, I happened to be working for a state agency.
Um, and so they called, and I'm talking to him for a minute, and I said, "Okay, well, I'm going to ask you to put me on the do-not-call list." So he goes, "Okay, well, you know, if you don't want to take advantage..." I said, "Well, the proper thing is to add me to the list, and confirm you've added me to the list, and then hang up." And he goes, "Oh, well, that's fine." And I said, "So, have you added me to the list?" And he goes, "No, I'm not going to do that." And I said, "Okay, cool. If you call again, my lawyers will be the ones who answer the phone." And he goes, "Oh, you have legal counsel on retainer for spam calls?" And I said, "No, but I happen to work in the attorney general's office here for the state of Oregon. Hey, Tim, what are you doing?" Immediately hung up. >> [laughter] >> That is an excellent way to do that. >> "You've got about five minutes free? I've got someone you really need to talk to right now." [laughter] >> Yeah. Oh, man. And to think, I would just mess with them and, you know, say that I run Linux, or, you know, the air-duct-cleaning ones, be like, "No, no, I have geese. You can't clean my ducks. I have geese." [laughter] I would just mess with them that way. Uh, all right. We got a great one here from Jack Sparrow, donating $20 to the stream. Thank you very much, Jack Sparrow. A question for Chris: can TrueNAS SCALE hope to see a network mode option in the app deployment dialogue, with a "container: other app" option, for... reasons? Now, we're talking about tunneling everything through something like a hypothetical WireGuard or a VPN, and you want to be able to point there, because the Linux ISO you want isn't available in your particular country. Hey, I get that. It happens. Um, >> there's about 38 states where Linux ISOs aren't available right now. I totally understand. >> Exactly. Um, so we are working on, you know, expanding the network connectivity.
Uh, the challenge with that is, especially when you're making apps dependent on other apps, then we have to create things like a startup order. Um, >> it's not something we had initially envisioned needing to design for, people doing this, like, nesting doll of apps. We had somebody who said, "I run my pfSense VM on TrueNAS, and TrueNAS itself is gatewayed through that VM." So they got themselves into this, like, ouroboros where the middleware wouldn't fully start the apps, because it couldn't find the network, because it was dependent on a VM. >> Wendell's most cursed routers. >> Exactly. This [laughter] was, like, a cursed network setup. Um, >> it's something we're certainly going to look into, is expanding all of the different network connectivity options. Um, doing something like this with a WireGuard or a, you know, OpenVPN or anything like that, where you say, "Hey, I want to make sure that all my apps are separately tunneled out, because I need them to go different places," >> then that's probably a case where, you know, we'll look into this. This is one of the things where I can take it back to our engineering team and say, hey, there's a lot of people who want to do this, like, you know, apps inside a >> apps tunneling in there. Vince is giving me flack about jails with VNETs. Hey, with LXC containers, you can do that all you want there, too. You want to do LXCs, you want to do VMs. Um, this is specifically people who want that Docker app stuff in there. Yeah. And so that requires a little bit more plumbing, if you want that point-and-click, just-works kind of stuff. Um, but yeah, that's one of those things: I'm absolutely going to take this back to the engineering team and say, let's look at this, because it was something that got brought up to Chris and I on our previous podcast. It's getting brought up here. There's clearly an appetite for it. So, I mean, let's go for this.
I just have to call out the fact that Jack Sparrow is asking for, uh, specific startup for network mode applications. >> I wonder why. I wonder [laughter] why the greatest pirate in history wants to know about network configuration. >> But you have heard of him. Um, yeah. Uh, Jack, one video that might be interesting for you is actually one I published just a couple of weeks ago on a VPN gateway. Um, and that is a server-agnostic service that I like to run. It runs in a VM, but it runs an OpenVPN gateway with routing and firewall rules, uh, through iptables, in a VM that you can connect to a commercial VPN like a Nord or a Surfshark or whatever else. Um, and it has a built-in kill switch. Uh, so all you have to do is point your network traffic from whatever client device you want at this particular VM, and that VM becomes your gateway device to the internet. Um, if your commercial VPN is not connected, it denies all traffic by default. Um, and so if you don't want things to access the internet, you don't want a personal history, you don't want any trackable data, uh, going through that, uh, that is a fantastic way to go. >> Cool. Uh, got another one from American Cosworth. He says, "Uh, what I need is TrueNAS to be able to easily add disks one at a time. Not for me, but for my family and friends who look at me strange when I say Linux." Uh, well, good news is that you can actually add a disk at a time into RAID-Z1, RAID-Z2. You can do that now. It is truly one at a time. You can't add two at a time and expand to both of them. You have to expand by one, let it, you know, resilver in, and then expand by the second one, let it resilver in. It is truly disk-at-a-time, sequential expansion. So we can, in fact, do that. Um, >> 24.10 is when we added that one. >> Yeah, that was, uh, an addition, uh, when OpenZFS themselves started supporting that. Yeah.
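The VPN gateway kill switch described above boils down to a default-deny forwarding policy. A minimal iptables sketch, assuming the gateway VM has a LAN-facing `eth0`, an OpenVPN tunnel on `tun0`, and a `192.168.1.0/24` client subnet (all illustrative names, not from the video):

```shell
# Allow client traffic out ONLY via the VPN tunnel
iptables -A FORWARD -i eth0 -o tun0 -s 192.168.1.0/24 -j ACCEPT
iptables -A FORWARD -i tun0 -o eth0 -m state --state ESTABLISHED,RELATED -j ACCEPT

# Default deny: if tun0 (the VPN) is down, nothing can leak out the WAN side
iptables -P FORWARD DROP

# NAT outbound traffic onto the tunnel so replies route back correctly
iptables -t nat -A POSTROUTING -o tun0 -j MASQUERADE
```

The kill switch is simply the `DROP` policy: when the commercial VPN disconnects, `tun0` disappears and the ACCEPT rules no longer match, so all forwarded traffic is denied by default.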
>> Um, and uh, I haven't tried it yet, but I actually have a vdev that I need to do that with, because my original vdev was seven drives wide and I have an eighth drive that I would like to add to it. And so that's probably a video that I'm going to end up making here in the next month or so. >> It is going to mess a little bit with your reported available space until you actually start using it. Um, ZFS rewrite can help some of that, but again, the challenge with ZFS rewrite is that it is going to basically inflate your space usage if you're using snapshots. Yeah. So that's the challenge. Um, all that data has to go somewhere. >> Yeah, it's got to go somewhere. Um, >> digging, poking around here. Um, we've got, uh, Josh Hass here. He says, "TrueNAS in a VM: still a good idea?" Um, and people are saying, hey... >> This is another great myth. >> You know, "Oh, you can't virtualize TrueNAS," or "You can virtualize TrueNAS." This is a great one to bust. And people are calling it out in the chat, which I appreciate. Doing it right: use hardware passthrough of your HBA. Um, so >> you know, pass that through there, because what you really don't want is to obfuscate any access to the disks. Hardware passthrough using the IOMMU: take your storage controller, whether it's, you know, your LSI HBA, your SATA card, your onboard SATA ports, or, if you're using an NVMe drive for boot, >> give that entire device to the VM and let it have exclusive access. Yep. >> So, the bonus to this: if you are running Proxmox specifically, here is your challenge, that Proxmox speaks ZFS as well. >> Yes. >> And people say, "Oh, I just pass raw disks. I don't want to isolate my HBA. I want to pass raw disks." The problem with that is that you can get into a scenario where both Proxmox and TrueNAS, with no awareness of each other, can simultaneously mount your pool. >> Yep, and that results in bad stuff. >> You basically get 32 transactions to figure out that this happened.
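The single-disk expansion being discussed (widening a seven-drive RAID-Z vdev to eight) is driven by `zpool attach` in OpenZFS 2.3+ / TrueNAS SCALE 24.10+. A sketch with placeholder pool, vdev, and device names:

```shell
# Check the current layout; note the raidz vdev's name (e.g. raidz1-0)
zpool status tank

# Attach ONE new disk to the existing raidz vdev, widening it by one.
# Data is reflowed across the new layout in the background.
zpool attach tank raidz1-0 /dev/sdh

# Watch the expansion progress; repeat one disk at a time if needed
zpool status tank
```

As noted in the conversation, this is strictly sequential: you let one expansion finish before attaching the next disk, and reported free space can look off until new writes land under the wider layout.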
And if you don't figure it out inside there, your pool is toast, because you have overwritten your uberblocks. You're dead. >> Yep. >> Now, there is something, and I wrote this, it's a resource on our forums. If you go to forums.truenas.com and look up, uh, the Proxmox and multihost resource: you can set a property on your pool called multihost. And despite the name of it, turning multihost on does not enable multi-host use. What it enables is a check for it. Mhm. >> So it will basically say, hey, if this flag is set, you need to do aggressive checking at import time to make sure this pool is not in use by another system. >> Yep. >> So turn on multihost. If you are doing this, if there is no way for you to do hardware passthrough in Proxmox, if you are insistent on passing individual disks, turn on multihost. Yep. >> It might mean you're going to take 30 to 60 seconds longer to boot that VM, but what it's not going to do is put you in a scenario where you can double-mount your pool. Yep. >> I would rather spend 30 seconds longer when I update, you know, twice, three times a year or whatever, than go, "Ah, crap, all my data is gone." >> Right. Um, I did a tutorial on disk passthrough a number of years ago. Um, and at the time, that flag wasn't available. Uh, and I don't think I mentioned that you could accidentally mount or import the ZFS pool, because Proxmox has been ZFS-aware for a very long time. Um, and so I didn't really cover that very well. I said, uh, TrueNAS needs bare access to a disk. It needs raw, drive-controller-level access to a disk in order to make everything work properly. And so there's two ways you can do that. You can pass through an HBA: here's your IOMMU, drop your PCI card in there, here's your HBA, done, we're good to go. Uh, if you can't do that, you can pass through individual disks, and then have the TrueNAS VM mount them individually as individual disks and add them to a ZFS pool. That also works.
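The safety net described here is the standard ZFS pool property `multihost` (multi-modifier protection, MMP). A sketch of enabling it, with a placeholder pool name:

```shell
# Enable the multihost activity check on the pool.
# This does NOT enable shared multi-host use; it adds an import-time
# check that refuses the pool if another system appears to have it live.
zpool set multihost=on tank

# multihost requires a stable, unique host ID per system; verify it:
hostid

# Subsequent imports now perform the activity check (roughly 30-60s
# extra), failing safely instead of double-mounting the pool.
zpool import tank
```

The design trade-off is exactly the one made in the conversation: a slightly slower boot a few times a year in exchange for never letting two hosts write the same uberblocks.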
>> But once you create a ZFS volume, that volume is now importable in Proxmox. And if you accidentally go to the Disks or ZFS tab in Proxmox and you click, "Oh yeah, there's a ZFS pool, let's add that in"? Toast. >> Bang. >> How quickly does 32 transactions happen on a ZFS pool, by the way? >> As slow as possible? 160 seconds. Yeah. >> If you don't write anything to trigger it otherwise: 5-second default timer, 32 transactions, 160 seconds. >> Exactly. Just under three minutes. >> Yeah. >> And at that point, suddenly you get, >> "Oops, we experienced corruption. Your pool is unavailable." Yeah. >> "And we can't mount it." >> Yeah. If you accidentally mount that pool, it's not even a menu-option thing. You have 160 seconds to run to the back of your server and rip a power cord out. >> Pretty much. >> That's your only option at that point. >> Yeah. Or kill the VM. >> Yeah. >> But then, at that point, Proxmox has got it mounted, and who knows what it's done to it. Um, you pretty much have to pull the power plug, >> which is amusing, because I actually saved somebody's pool. Who, now... I'm not going to name them. I'm going to protect the guilty here. >> A customer. They were using Claude to code some scripts for cleaning up certain things on their TrueNAS pool. So all you AI haters, go ahead and sound off in the chat for this one, saying he deserved it. Um, and this script was great. It said it was going to clean up all of these, like, Postgres DBs and stuff that weren't being used by his apps. >> Yeah. >> The challenge was, this script was expecting a variable to be set for the path to his Postgres directories before it issued the wonderful command to, of course, remove the French language pack, right? rm -fr. That variable was not populated on his system when he ran it, so he effectively ran rm -rf from the root of his drive. He got about three screen scrolls into it before he panicked and pulled the plug. >> Yeah.
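The "160 seconds" figure above falls out of two ZFS defaults: a ring of 32 uberblock slots in each label (the typical count on 4K-sector pools) and a transaction group sync every 5 seconds (the `zfs_txg_timeout` tunable). A quick back-of-the-envelope:

```shell
# On an otherwise idle pool, the oldest rewindable uberblock gets
# overwritten after one full pass around the ring:
UBERBLOCK_SLOTS=32   # uberblock ring entries per label (4K-sector pools)
TXG_TIMEOUT=5        # default zfs_txg_timeout, in seconds

WINDOW=$(( UBERBLOCK_SLOTS * TXG_TIMEOUT ))
echo "recovery window: ${WINDOW} seconds"   # 160 seconds, i.e. 2m40s
```

Under write load, transaction groups close faster than the 5-second timer, so in practice the window can be much shorter than 160 seconds; that is the "as slow as possible" caveat in the conversation.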
>> And came to the forums to say, "Please help. What have I done?" >> Yeah. >> And I said, "Well, you are pretty far along the FA curve. [laughter] We're going to hopefully prevent you from climbing up the FO axis, >> right?" Well, the FO is a downhill slide. >> Yeah. Well, it's the more you f around, the more you find out. And with ZFS, this is an exponential curve. Yes, this is not linear. Yeah, >> you can f around quite a bit before you find out. It will protect you from a lot. >> And everybody's had that. >> Once you cross that summit, >> there's one way to go. [laughter] >> Bye-bye. So, what we actually did was, I said, "Take out your boot drive. You're going to install a new copy, and you're not going to import your pool." We actually manually wound him back. I had him dump the labels from his disks and say, "Which of these timestamps is before you hit the Enter key on that ill-fated command?" >> He found a timestamp. I said, "We're now going to import your pool, read-only at first, at this timestamp." And you have that ability with ZFS. Yeah, >> you can walk those 32 uberblocks back, and those are all valid entry points for your pool. We walked him back to before he ran that command and had to pull the power plug, and I said, "Mount read-only. Go look in here. Check your datasets. Are they there?" "Oh, good. They're present." Okay. "Now, you're going to run this command, and you are going to sit there while your system chews on that for several hours." [laughter] He did that. He rebooted one more time. All of his data was back, because it had not successfully been flushed out of the history yet. Yeah, >> that was probably one of the best recoveries I'd ever done. >> Yeah, >> because it was like, you basically rm -rf'd your root. >> Yeah. >> And you still have your data. >> Yeah. >> That's a testament to... But you better believe he set up snapshots that same day after he got his data back. >> Yeah.
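The rough shape of that rewind recovery, using standard ZFS tooling (device names are placeholders, and `<txg>` stands in for whatever transaction group the label dump shows as predating the accident):

```shell
# Dump the vdev labels and the uberblock ring; each uberblock entry
# carries a txg number and timestamp -- pick one from BEFORE the mistake
zdb -l /dev/sda1
zdb -ul /dev/sda1

# Import read-only, rewound to that transaction group, and inspect
zpool import -o readonly=on -T <txg> tank
zfs list -r tank     # are the datasets still there?

# Only once the data checks out: export, then re-import read-write
# at that txg and let the pool settle (this can take a long time)
zpool export tank
```

This only works inside the uberblock window discussed above, and a read-only first pass is the whole trick: it proves the rewind point is good without writing anything that would burn more of the ring.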
[laughter] >> Ah, that was a good one. Um, yeah. So, we got a couple here. A couple donations from, uh, The PC Archive. We got two $5 donations. Uh, greatly appreciated. He says, "Listen, I'm a huge ZFS nerd. My main server is a Minisforum MS-A2 with an external HBA, and my backup TrueNAS is my first PC case from 1998." Um, [laughter] is it nice and beige? I don't know if you can see it in the frame here. Uh, there was a little Dell... I think it's just out of frame of me, unfortunately. I have a P3 933 that is sitting beside or behind me there. >> One of my co-workers has that exact system. I have worked on that so many times. Is that a Rambus system? >> It is not. >> You got DDR or SDR? >> It is SDR. It has a whole 128 megabytes of RAM in it. >> He had a P3 933 that was a Rambus system. >> Really? >> Yes. >> Really? I thought the P4 era was when RDRAM showed up. >> Nope. >> Yeah. Well, we are, uh, happy to keep you company on your night shift here, The PC Archive. Cheers for that. Um, even though you can't join us in the drinking, hopefully. >> Depends on where you work, I guess. >> Yeah, that's very possible. [snorts] Uh, we got another $10 from "YouTube sucks so bad." Uh, hopefully they don't demonetize your stream for this, Jeff, [laughter] with us saying that. Uh, to be clear, reviewer, that is the gentleman's username. We are not implying that you're bad. We do, in fact, enjoy this. Uh, he says, "I could just ask TrueNAS support..." Oh, you kind of are, in a sense. Uh, "...but I figured it'd be good for viewers to know: besides virtualization and containers, what are some differences between TrueNAS CORE/Enterprise and TrueNAS SCALE?" So, little nomenclature thing here. Um, Enterprise is the umbrella term that encompasses both TrueNAS CORE and SCALE when you put it on our hardware. >> Right.
>> Uh, so you're saying iXsystems hardware... >> On iXsystems official hardware, on TrueNAS hardware, because we officially do business as the TrueNAS brand name now. We don't even call ourselves iXsystems. >> Okay. >> Um, >> good to know. >> But if we say you're running TrueNAS Enterprise 13, that is the shared codebase with CORE, so you're running the FreeBSD version. >> Um, if you're running TrueNAS Enterprise 24.10, you are running the TrueNAS SCALE codebase. Um, the difference is, well, obviously, CORE is BSD-based, SCALE is Linux-based. That's the main piece behind it. And with that comes all the other, um, you know, services and the middleware pieces that need to show up in there. So, for instance, um, >> uh, the CTL driver for iSCSI on BSD has been replaced by SCST on the Linux side. >> Mhm. >> Um, we've got the different kernel-mode SMB drivers on one and the other, functionality differences. I mean, you've got the, you know, the, uh, NVMe target driver on the SCALE side, and you've got absolutely nothing on the BSD side in comparison. So, >> yeah, it is, uh, yeah, TrueNAS Community and TrueNAS Enterprise is now how we divide the two, and, you know, it's all Linux-based now. Um, we are not continuing to issue new FreeBSD-based versions of TrueNAS, >> right? Uh, speaking of BSD, though, Vince chimes in with another $5. Thank you very much. "Hey Chris, want to have a long discussion about all the oddities I found lately with ZFS labels while building a custom OS image builder from scratch?" You guys had quite the conversation over the last week. >> We did. And now I'm trying to recall the exact nuances of this one. Yeah. Um, yeah, that one... So, what I understand you're doing, Vince, this has to do with your, uh, you know, Raspberry Pi-based time server that you were building. Um, you know, go ahead and stop me if I'm sharing too much here.
Um, but basically, you're using a ZFS-based boot volume on that, and you were basically saying it's kind of like a pre-baked build pool, or a boot pool with a set GUID on there. And the way you are reformatting this is basically you're dd-ing that image to an SD card, which it would then have to sort of expand in place. The problem is, when you put it on there, um, you know, you started with your small image, whether it's 4 gigs, 8 gigs, however big your boot image is, and then you expanded it to grow. The way ZFS works, for those who don't know, is it actually stores four copies of your disk label. There's two at the front, >> there's two at the end. So what was happening here was, you have two at the front and two at the end of the 4-gig partition. You then expanded it all the way, and you would want to put them down here. But what happens is, if you've already used this SD card before, >> you've got two here, you've got two here from your old version. So you've got four here. You now have six labels. They all have the same pool ID on them. Yeah. >> So when ZFS sees that live, it sees four labels that are from a timestamp probably in the past, >> two labels that are newer at the end, because this was a previously used card and you reimaged from a release that was a month ago. >> So all of a sudden you've got six labels. Yeah. And they don't agree. >> The one thing ZFS hates more than anything is ambiguity. It is an atomic file system. It likes true and false, ones and zeros, yeses and nos. There is no in-between, there is no maybe. When ZFS sees a maybe, >> it panics. Yeah. >> Like, literally, it will go, "I am not touching this pool." >> Yep. >> I/O error, pool suspended. You have an ambiguous configuration. So what we ended up having to do, like... >> Which is great from a data-preservation standpoint. >> From a data-integrity standpoint, it's exactly what you want. >> Yeah.
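The six-label situation follows directly from the label layout: each ZFS vdev keeps four 256 KiB labels, two at the start of the partition and two at the very end. The end pair moves when the partition grows, but the old pair stays stranded at the old end. A sketch with illustrative sizes (a 4 GiB image dd'd onto a card, then expanded to 32 GiB; these numbers are assumptions, not from the video):

```shell
LABEL=$(( 256 * 1024 ))                  # 256 KiB per label
OLD=$(( 4  * 1024 * 1024 * 1024 ))       # size of the dd'd image
NEW=$(( 32 * 1024 * 1024 * 1024 ))       # size after in-place expansion

# Front labels (L0, L1): same offsets before and after expansion
echo "front: 0 and ${LABEL}"
# Old tail labels (L2, L3): now stranded mid-card, still carrying the pool GUID
echo "stale tail: $(( OLD - 2*LABEL )) and $(( OLD - LABEL ))"
# New tail labels: at the new end of the partition
echo "new tail:   $(( NEW - 2*LABEL )) and $(( NEW - LABEL ))"
# 2 front + 2 stale + 2 new = six labels, one pool GUID, and ZFS balks
```

`zdb -l` on such a device is how you'd actually see the disagreeing labels that import then refuses to resolve.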
But you end up with these really weird edge cases where you're trying to expand it. And the key is, if it was different pool IDs every time, it wouldn't be an issue. >> Yeah. >> But it's the same pool ID. So all of a sudden you've got, like, you know, four here and two here, and ZFS gets confused. Um, like, the way you can fix that is, say, you go all the way to the end and you nuke the end of the disk. Yeah. You left a hole at the end of the disk. >> Yeah. >> That's how you solve it. And that's how we solve it in our middleware as well. We will say, hey, if we see ZFS labels on a disk and we're importing it and making it part of a pool, we're going to swat those out first, before we bring it in, just so we don't have weird things like that happening. >> We're going to take the entire table of contents and we're going to assume zero, >> just in case. Right. Exactly. And we put up lots of warning flags and red dialogues to say, "Hey, you're adding this into your pool. We're going to erase everything on that. Are you sure?" >> Yes. >> And you've got to check that confirm dialogue box. >> Yep. And, uh, when importing new disks... because I reuse disks all the time here, you know, it is a home lab, essentially, and so, you know, obviously I'm reusing hardware all the time for different projects, and sometimes I don't erase my disks, and I've got two or three disks from two different ZFS pools that I'm trying to throw into something, and it'll go, "Hey, this is an existing ZFS drive. I'm not gonna touch it, uh, because we don't know what you're doing." Uh, and I'm like, "No, I just want to make a new RAID-Z." And it's like, "Nope, I don't care. It has ZFS data. I'm not touching it." [laughter] And so, a lot of times, to create a new ZFS pool, if I'm doing multiple drives from multiple different pools, you will have to go into fdisk and clear all partitions.
And I will also go in, uh, I'll boot into a Windows installer, and I will load up, uh, the disk utility, and I will just format it with NTFS, just to make sure all the flags are gone, because I have used fdisk before and the table of contents is still present, which means it still has those first two labels. >> Yeah, >> I've had that happen before. >> There was actually something in, um, libblkid that had to do with finding spurious NTFS labels that was causing issues previously with ZFS. I'll have to dig that one up, >> but >> and it was actually... you had to go in there, um, and poke out the NTFS, because Linux was trying to understand it. Yeah. >> Um, we no longer have an NTFS driver in TrueNAS, so it doesn't affect us. But back when we did, it was like, "Hey, this is..." >> It was one of those weird things where, like, we're not seeing this, but our community is, so what's going on? Oh, okay. That's how it happens: they took their old Windows gaming system, they put TrueNAS on it. >> Their Windows gaming system was running NTFS before, >> right? >> Yep. No, those two file systems don't like each other at all. Uh, there's flags set in various locations on drives that, if they're detected, they just go, "No, I'm out." >> Um, and so my usual modus operandi will be: if I'm taking a drive that used to be ZFS, I will fdisk it and delete all partitions, and then I will use the disk utility and format it with one clean NTFS partition, and then any of them will be able to wipe it and import. That's what I found is the thing that works for me. >> Yeah. Michael Sutherland's got it right there: diskpart clean. That's the one. Select disk whatever, diskpart clean. >> Yep. >> Boom. >> I've had clean still not clear, uh, ZFS flags before. >> I have. >> It's a Windows tool. It doesn't care about ZFS, >> right? It doesn't care about ZFS. Right, Windows doesn't care about ZFS.
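For what it's worth, there are more direct ways to evict stale ZFS labels than the fdisk-then-NTFS dance described here, using tools ZFS and Linux already ship. Device names below are placeholders, and both commands are destructive, so triple-check the target:

```shell
# ZFS's own tool: clears all four vdev labels on the given
# device or partition (-f forces it even if labels look active)
zpool labelclear -f /dev/sdb1

# Or the blunter instrument: erase every filesystem, RAID, and
# partition-table signature the kernel's probing code recognizes
wipefs -a /dev/sdb
```

Because `zpool labelclear` knows about the tail-of-disk label copies, it also catches the leftover end labels that a quick partition wipe or `diskpart clean` can miss.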
But if I just take a ZFS disk and I diskpart clean it, the first two ZFS labels are not cleared off of it. And if ZFS sees those first two labels, >> yeah, it's going to have a little moment there. >> Yeah, it'll have a little moment. And I've had issues where it goes, "No, I'm not going to touch this disk. There's still ZFS flags." I'm like, "I've wiped it four times. What are you talking about?" But then if you create a new NTFS format, it'll go, "Okay, this is obviously an NTFS disk. I'm okay now." >> Pave over it. >> Yeah, exactly. >> Yeah. Uh, a couple more good questions here. Uh, so The PC Archive again, another $5. Thank you very much for that. He says, "My first PC was a beige Compaq, and it had a 12-gig Quantum Bigfoot, 5 and a quarter." The chonker, the big guy. He says he backed up the original image of Windows 98 First Edition, back when we were all tolerating Windows a little bit better than we do now. >> Yeah. Um, I've only ever owned one 5-and-a-quarter drive, and it was also with a Compaq of about that same era. It would have been a Pentium II, like a 400-, 500-ish megahertz. Um, yeah, it was a Quantum Bigfoot. I think mine was an 8-gig. It wasn't the 12. >> Now I'm showing my age here, but my 5-and-a-quarter drive was an MFM/RLL on an IBM PC XT. >> Okay. Yeah, I can go XT hardware on you as well. You're not that much older than me. >> Get off my lawn. >> Yeah. [laughter] [gasps] Uh, >> um, my 5-and-a-quarter drive... I was already actively working on computers, and, uh, my neighbor had an old Pentium II system, and he goes, "Oh, this thing's a piece of junk and it doesn't work." And there was some really simple fix with it. And I remember, like, he goes, "Do you want this thing? Like, you're smart with computers." And I went, "Yeah, sure." And I took it home, and it took me like 20 minutes to fix it and get Windows installed.
And, uh, I said, "Oh, it was just this thing." And my dad pulled me aside and he goes, "Don't ever tell people that. If they give you something for free, don't tell them you fixed it. Just use it." And I went... >> Just, "Yeah, it was so hard to fix. I put a lot of blood, sweat, and tears into it, but I got it kind of working." >> Yeah. I told him, "Yeah, in five minutes I figured out what the problem was, and now I have a new Pentium II machine. Thank you so much." And he goes, "That's really rude." And I went, "Okay." [laughter] >> Oops. Lessons learned, right? >> Yeah, exactly. I still remember that. So, thank you, Dad. >> Um, Michael sends over $10. Thank you very much, Michael. Uh, no question, just throwing $7 in for a great episode. That's $10, though. >> Um, although he took out the 30% for YouTube. >> There you go. >> Cheers, my friend. Thank you so much. Someone's been paying attention to the rants that I have about YouTube sometimes. >> I am running on empty, unfortunately. >> You are. Um, I never announced my second beer, which has been treating me nicely. This is from Brewery Ommegang, which is a very interesting, uh, name: O-M-M-E-G-A-N-G. Uh, this is part of their Gnomegang series. This is a Belgian-style blonde ale clocking in at 9.5%. And it is fantastic. Um, >> oh boy. >> Uh, for any fans of Belgian beers in here, those, uh, those banana esters are doing some overtime here. Uh, >> excellent. >> Fantastic. I am loving this thing. >> Okay. >> Uh, Jay sends over another $5. Thank you again, good sir. Uh, "Any exploration of official alternative parallel file systems to support scale-out use, since Gluster EOL'd? Asking for an HPC director who is also me." Um, exploration? We'll say there is exploration happening. Anything further than that, I cannot commit. We are always exploring. We were very disappointed that Gluster went away. Um, but we are definitely exploring things.
I can't commit to anything further than that, because obviously, stuff that we do always has to pass our own tests, and our engineers are extremely picky. Yeah. >> About what they want to let in. >> Good. >> Uh, yeah. Uh, sneaking in another one here real quick. Um, 89T Supra. Uh, great car generation, by the way. The Mark III was, uh, you know, underloved. Everybody loved the Mark IV because they saw it in Fast and Furious, >> right? >> Mark III, fantastic. >> Mark III is still great. >> Yeah, lovely car. Uh, he asks, "Will you ever officially support the legacy NVIDIA drivers, or are we needing to buy a new GPU? I have a P4." Hey, P4 gang showing up. "I use it for image stuff, but after upgrading to 25.10, it is dead." So, what happened here is, this is NVIDIA's decision to use the open kernel module driver and cut off support for everything that lacks the GPU System Processor. So, everything older than the Turing, the GTX 16-series generation GPUs. Um, >> we had to make a tough decision at TrueNAS to say we can only really ship one kmod with it. We can either ship the open or the closed one, >> and being open-core, open-source, we went, we've got to ship the open one. It's the core of what we do. And then, also, they then went and required the open one for Blackwell, so the 50 series. It was basically, well, that's the way the wind is blowing, unfortunately. >> But >> yeah, do you support Kepler through Volta and not support the 5000 series, or do you support the latest, greatest, highest generation and cut everything below? We cut off the true legacy, so the Kepler stuff; that was cut off a while ago, and that's actually incompatible with the kernel. >> Um, but I actually dropped a link in chat there. Um, I'm running under my, you know, T3 Podcast account. I dropped a URL in chat there. Uh, somebody has actually [snorts] recompiled it... Send one more message, because I have all URLs blocked by default, so I need to add you as an approved sender so you can send URLs. So, do like a test message.
I'll add you as approved, and then you can send the URL. >> Okay. I just dropped it there as a follow-up, as T3 Podcast there. Um, but it's on our forums if you look for the, uh, NVIDIA compatible driver test for 25.10. Um, yeah, somebody has actually recompiled it and said, "Hey, I now have a, uh, sysext module that will let you use the closed-source, um, driver." So that will bring your, uh, your Pascals, your Maxwells back to life >> in 25.10. >> Now, is he going to continue building this for every release going forward? Is he going to build it for TrueNAS 26? >> Well, it's a community-supported effort, so we don't know. Um, it's one of those challenges. Like, it's, you know, it kind of sucks. It affected me, too. I had two P4s that I had to rehome, and, you know, ended up with an Arc card. But >> it's kind of one of those things, like you said. >> So, yeah. >> Like, do we continue supporting the old stuff, uh, or do we have to, you know, unfortunately prune some of that and say, you know, this is the way NVIDIA is going, >> yeah, >> and, you know, embrace that, >> kind of? >> That's part of the yin-yang of being not only an enterprise provider but also committing to LTS releases: you have to go the way the wind blows. Um, >> yeah, >> you can go, you know, closed-source driver, but you also run into all of the implications that come with that, which is no 5000-series support and limited kernel availability as far as various versions of it as well. >> On the flip side, it's like, we want to be a fully open-source company. That means we're going to run the open-source driver. That means we're also going to run LTS. That also means some hardware is going to be left behind, because there's no longer availability, and that's just the way it is. >> Yeah. >> Yeah. It's unfortunate. It's kind of the way things had to go. >> Yeah. >> So, I don't know.
Let's not end on a down note. Let's find something else. Like, I mean, there's got to be... >> Drop your P4 into a Proxmox box. That's where it really shines. >> Mine runs Doom like a champion. >> Yeah. >> First thing I did. I was like, "Hey, let's get this thing running Doom." >> Yeah. >> 100, 120 FPS, max details, >> barely breaking a sweat. >> Yep. Um, I've got... I'm always the pragmatist, I think, of a lot of YouTubers, especially when it comes to gaming, because what is the point of building a gaming PC? It's to play games. I've never been one to, you know, pixel-peep at every single image, or "why am I only getting 180 frames per second out of this thing," or "why is my low dropping to 90." You listen to a lot of enthusiast channels, and they're like, "This is just unacceptable. A 1% low of 90? No, I bought a 144 hertz panel. I want all 144 hertz." It's like, well, then turn off ultra settings and run it at high. Done. And I'm actually going to be leaning into that a little bit more, especially with hardware prices being what they are right now, uh, as far as RAM, GPUs, storage, everything else. It's like, man, the return of the budget gamer is going to be a heck of a thing over the next couple of years, because, as we've, uh, seen with memory prices... I'm giving us a segue here in just a second... uh, as we've seen with memory prices, um, it's going to be bad for a little while. Um, and maybe figuring out what we actually need, versus what we want, versus what enthusiasts say we need, is kind of the right way to go when it comes to gaming, when it comes to even home lab stuff. Uh, I didn't feel like I'd be leading by example, but suddenly I am, going back to an X79 DDR3 system for my main home lab. I've got a Xeon 6 system with a terabyte and a half of memory behind me. It's turned off, 'cause I don't need it.
And I kind of want to lead by example, going, like, look, if I can run my home lab for 400 bucks, anyone can run their home lab for 400 bucks. Uh, if you want to play games, >> if you've got a 1070, a 1060, uh, you know, a Ryzen, you know, 1500, 1600 AF, whatever, play games. Let's go. It doesn't have to be... you don't have to run at max settings, right? I played for the longest time, like, when, you know, we were all broke college students, I was playing on integrated graphics. >> Yeah. Stuff was still fun. >> Yeah, >> stuff was still fun. >> I played so much Quake 2 and Quake 3 on an Inspiron 2400p with Intel 815 integrated graphics. Like, guess what? It ran at 640 by 480, and it ran like crap at that. >> Oh, yeah. >> But you know what? I played it, and it was great. >> You played it, you had a fun time with your friends. I mean, that's what it was about. >> Yep. Exactly. And I think we've gotten too much towards the "if it's not a Ferrari, it's not worth my time" mentality with a lot of PC hardware, when people, developers in particular, have been leaning so hard into the Steam Deck as far as compatibility. Do you realize how pitifully slow the Steam Deck's hardware is? Like, really, compared to, like, discrete graphics cards? >> This is four times faster. And this is $65 on eBay right now. >> Yeah. It's four Zen 2 cores and, what, like eight CUs of RDNA 2? >> Yes, precisely. >> Yeah. >> But hey, look at how many people play games on that, and they have a good time. >> Yeah. >> That's the key, you know. >> Exactly. >> All right. Uh, segue, because I'm so natural at those. Um, yeah, I warned about this in a couple of previous videos, but man, the news just keeps getting worse and worse. And as I said in my last video, buckle in, friends. It's going to be a long ride.
And that is: the memory price outlook for Q1 2026 is sharply above expectations, beyond what we thought even last month. It's not good. >> Yeah, I know the world is scary right now, but don't be bothered. It's going to get worse. >> Yeah. >> So, um, yeah, you can see there, you know, for the first quarter of 2026, conventional DRAM contract prices are expected to go up 90 to 95% quarter over quarter. Um, NAND flash going up 55 to 60%. So, I think you said it previously: we shouldn't expect a lot of relief in 2026 for this. >> Nope. Uh, Ian Cutress, who's actually in chat right now. Hello, Doctor. Um, >> uh, TechTechPotato. He's right over there. Um, >> Dr. Cutress, I remember him from, uh, back in the AnandTech days. >> Exactly. Exactly. Um, yeah. Uh, he did a video, uh, earlier this week, uh, he was on a podcast, talking about, you know, the RAM unavailability, storage unavailability. Um, we're talking about RAM fabs that are selling all of their stock of RAM, and all of their future stock of RAM, RAM that hasn't been created yet, um, on money that hasn't been earned yet, to companies that haven't spent it yet. Um, and that really sums up the whole picture. But it is companies like NVIDIA, AMD, Intel, Samsung, Kioxia... like, go down the list of any memory company... buying up every chip possible and every fab second possible. Um, 2026 is sold out. It's February 4th. >> They are 150% sold out. >> Yeah, it's February 4th, and 100% of 2026's allocation is going not to you. Um, 2027 is almost sold out also. It's also not going to you. Yeah. It's a tough time to be a hardware enthusiast. So, you know, like you said, you're leading by example with running your home lab on X79 and DDR3. >> Yeah. Um, you know, like I said I was going to do myself: I'm going to take my all-SSD system, and I'm going back to hybrid. >> Yeah.
>> I'm I'm gonna I'm going to show people that this can work, that you can make things, you know, perform and sing and dance quite well without having to spend, you know, a a new car payment every time you want to add a drive to it, >> right? Um I I'm running my editing rig right now. Like I I have been editing solely off of a TrueNAS SMB share for 7 years almost. Um and uh and it's worked fantastically. Um 100 gig is not worth it, but 10 gig absolutely necess necessary. Uh but if you can run 10 gig, >> you can do anything you want with it. Um and so what I have is a pool of four NVMe drives in a single RAIDZ. So three disk with a one disk failover. Um and uh uh that is my my active uh editing pool. Um that is where my cache lives. That is where uh all of my my raw files live. Those are automatically synced up to my my main pool which is an 8 disk array of 20 terabyte drives. Um, if I need to access something from three months ago, it's on spinning rust. And you know what? It plays at the same speed as my NVMe because it's all sequential. >> Yeah. >> They're not editing off your, you know, your spinning discs, >> right? Um, and most of my things that I grab are 5 10 second grabs from a video that are 3, four months ago that are all on spinning rust. And you know what my editor thinks of that? Cool. It's sequential reads. I don't care. I can still I can I can read this 240 megabit file at 10 gigabit. No problem. >> Exactly. Well, I mean it's it's almost like you're using a file system that works really well at, you know, coalescing all these small writes and doing predictive prefetch when you start hitting the first, you know, couple megs of a file. It goes, >> you know, he read the first four megs of this file sequentially. I bet he's going to want the next, >> right? >> Let's let's queue it up, guys. Bring it up to the front. >> Yep. Exactly. And and so >> VIP lane. >> Yep. So I've I've got uh I actually miscued myself earlier.
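The two pools Jeff describes — four NVMe drives in a single RAIDZ (three data, one parity) feeding an 8-disk array of 20 TB drives — can be sized with a rough RAIDZ capacity sketch. This is a simplification: it ignores metadata, allocation padding, and the TB-vs-TiB gap, and the parity level of the 8-disk pool isn't stated on the show, so RAIDZ2 below is an assumption.

```python
def raidz_usable(drives, size_tb, parity):
    """Rough usable capacity of a RAIDZ vdev: data drives times drive size.
    Ignores metadata, padding, and TB-vs-TiB, so real numbers run lower."""
    if not 1 <= parity <= 3 or drives <= parity:
        raise ValueError("RAIDZ needs 1-3 parity drives, and more drives than parity")
    return (drives - parity) * size_tb

# 4x 2 TB NVMe in RAIDZ1 (three data + one parity), as described on the show:
print(raidz_usable(4, 2, 1))    # 6 TB usable
# 8x 20 TB spinners, assuming RAIDZ2 (parity level not stated):
print(raidz_usable(8, 20, 2))   # 120 TB usable
```

The same formula explains why Jeff's 390 TB of raw disk across all his pools lands near 190 TB usable once parity and filesystem overhead come off the top.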
I I I said I had 256 in my TrueNAS server. I've only got 128. >> Um and I've got 390 terabytes of of physical raw storage. Uh >> you're you're violating you're violating the 1 gig per terabyte rule, Jeff. >> Oh, absolutely. Uh, down to about 190 terabytes of of after file system and redundancy and everything else of actual usable storage. Um it runs great like really really good. >> That's awesome. Oh here's a good segue from my side. So what what kind of NVMe SSDs are you using in there though? Um >> what kind of workload do you hit them with? They are Gosh, because you know where I'm going with the segue here. >> Yeah, I'm I'm trying to remember which. So, I I have four two TB NVMe drives and I I bought two sets of them and I can't remember which one I actually installed. I think these are actually the budget ones. The these are the Silicon Power UD90s. >> Okay. Um, I've I've also got a set of also budget Western Digital SN7100s, and I don't remember where those are at. Uh, but it's one of the two. >> SN7100s are good. >> Yeah. But they're they're both on like the slightly more budget side of Gen 4 NVMe. >> But but I bet what they aren't is I bet they aren't the absolute most bargain-basement NVMe DRAM-less QLC from a vendor that looks like you rolled your face over your keyboard, right? >> They are not KingSpec. They are not uh >> any of those ones like that. >> They don't start with a Q. Uh yeah, they're Yeah, exactly. >> Exactly. And that this was something that somebody poked in earlier in your Discord server talking about uh you know, they had a huge difference in performance on their Syncthing install on their SSDs. So, this is one of the things where it it's it's not good enough to just buy any random SSD, >> right? So you have to you have to say hey if if you're doing a lot of sequential stuff, you're not hitting them too hard, you know, maybe QLC NAND is going to be fine for you because you get to write in the big chunks to them that QLC loves.
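The "1 gig per terabyte rule" Chris teases Jeff about is an old ZFS ARC sizing guideline, not a hard requirement, and Jeff's box is a live counterexample. A minimal sketch of the arithmetic, using the numbers quoted on the show:

```python
def arc_rule_of_thumb(raw_tb, gb_per_tb=1):
    """The old '1 GB of RAM per TB of storage' ZFS guideline.
    A rough ARC sizing heuristic, not a hard requirement."""
    return raw_tb * gb_per_tb

raw = 390   # Jeff's quoted raw capacity, in TB
have = 128  # his installed RAM, in GB
want = arc_rule_of_thumb(raw)
print(f"Guideline suggests {want} GB; installed {have} GB "
      f"({have / want:.0%} of the guideline)")  # ~33%, and it still runs great
```

The takeaway matches the conversation: at a third of the guideline the pool still "runs great," because ARC sizing depends on the working set, not the raw capacity.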
Um if you're doing VMs and stuff that's where you need probably something that's a little bit faster. Maybe you throw something that's got a bit of that SLC caching in front of it where it says, "Hey, I'm a TLC drive, but I'm going to act like I'm SLC for the first, you know, whether it's 12 gigs, 20 gigs, or, you know, one-third of my drive. I can do the whole third of my drive, which is how most of Western Digital's, like their their Black line, and some of the Samsungs operate. They will do the whole drive up to, you know, one-third capacity at SLC style." The problem is when you run out of that single-layer cell space, >> all of a sudden down it goes. >> All of a sudden you're at QLC, >> all of a sudden you're you're getting native NAND speeds, and on on early QLC that was actually slower than hard drives. >> Yes, it was. >> You were like 50 60 megabytes a second. >> Exactly. >> So people were people were hammering their new system with writes. They were loading their new pool going, "Oh, it's all it's all SSD. It should be screaming fast." They migrated over from their hard drives and they go, >> "Well, what happened?" Yeah. >> all of a sudden I'm writing at 50 megabytes a second, >> right? Well, you only have 16 gigs of SLC. That's what's wrong. >> You tapped out your drives. >> Yep. >> Yep. >> Uh Ian also says, "If Jeff has a video idea, Kioxia will supply enterprise SSDs he needs." Uh, and I I've actually got a video upcoming um where a company offered me a system >> uh a dual a dual-proc Turin system, but they were only going to send a single M.2 480 drive, but it had eight U.2s in the front. And I went, how am I supposed to review this without storage? And so I said, can you at least include some storage? And they went, we'll see.
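The SLC-cache cliff the hosts describe is easy to model: writes fly until the cache fills, then drop to the NAND's native speed. This is a simplified static-cache worst case with hypothetical numbers (16 GB of SLC at 3000 MB/s, then native TLC or early-QLC speed); real drives use dynamic caches and recover during idle, so actual behavior is messier.

```python
def transfer_time_s(total_gb, slc_gb, slc_mbps, native_mbps):
    """Seconds to write total_gb when the first slc_gb lands in the SLC
    cache and the remainder goes at the NAND's native write speed.
    Static-cache worst-case model; real drives fold the cache dynamically."""
    cached = min(total_gb, slc_gb)
    rest = total_gb - cached
    return cached * 1000 / slc_mbps + rest * 1000 / native_mbps

# Hypothetical: 16 GB SLC at 3000 MB/s, then 500 MB/s native TLC
print(f"{transfer_time_s(100, 16, 3000, 500):.0f} s")   # 173 s
# Early QLC at the hard-drive-and-worse 55 MB/s discussed above:
print(f"{transfer_time_s(100, 16, 3000, 55):.0f} s")    # 1533 s
```

That second number is why people migrating a pool onto bargain QLC drives asked "well, what happened?" — the first few gigs benchmark great, and the bulk load crawls.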
Um, and so, uh, I'm I'm looking for a storage vendor right now because there's nothing worse than getting a whole bunch of, you know, a a nice 2U box with like graphics cards and Turin CPUs and 512 gigs of of DDR5 and not being able to run any storage on it at all, let alone do testing. >> So, you you're you're kind of bottlenecked behind that. you got this little tiny, you know, 480 gig M.2 and either that or you happen to go out to network devices, right? >> Right. >> I I know a great company that makes network storage devices, >> right? >> But it's it's not the same as local NVMe, especially when you're talking about a you know, U.2s or something, >> right, >> going on there. >> So, um >> yeah. Um, also I will I will state that I beat Linus to the punch by a couple of months because I was at the uh the uh uh FMS uh >> Flash Memory Summit. Yeah, >> exactly. I I was at FMS last year where Kioxia introduced their 245 terabyte drives and I went, well, four of these gives you a petabyte and uh Western Digital happens to be launching a uh an NVMe-oF box with 24 of those. So you could fit a crap ton of storage in this. And what's funny is neither Kioxia or Western Digital had tackled that from a marketing perspective. And both of them went, "Huh?" And then all of a sudden Linus is at the Kioxia headquarters going, "I've got four of these drives. I've now had a petabyte of solid state in my hands." And I went, "You son of a bitch." [laughter] >> Yeah. Be like, "I did it first." >> Yeah. Yeah. Oh, that's how it works. Well, I was going to say if if you're looking for SSDs, I got one more little horror story to tell you. And this one was about uh you know, lies, damned lies, and SSDs that lie about their cache flushing capability. >> So, this vendor, this vendor has since chapter 11'd themselves, and that's a good thing because when you get an SSD that lies about T10 spec, um I I want you out of the organization is what I want.
Um, so this vendor, it had a drive that basically one of the very important parts about ZFS is your drives need to honor that cache flush command because hey, they're they're going to stuff all your stuff to your, you know, your spinning discs, your SSDs, and then they're going to send whether it's a SATA or a SCSI flush command to say, hey, anything that's in volatile storage, you need to put that on stable disc. >> Yeah. >> Put it on your NAND, put it on your spinning platters, put it somewhere it's not going to croak if the power gets cut. Yeah. >> Well, this this particular drive, it lied about it. >> It would accept it. It would say, "Oh, yeah. I totally flushed that. >> It would just do it asynchronously in the background." So, boy, did it look real fast. And for things that did, you know, were used to that. Like, you put it in a Windows system, it flew. It was great. Um, but boy, did I find out real fast when I was testing that one out in ZFS and I went, "This guy's not behaving like it should." Yeah. >> So, so I did the test and I I yanked the power out on it and sure enough it didn't finish flushing. >> Yep. >> So, I still have this one just as it's it's sitting somewhere in a drawer in my room just so that I can pull it out and be like, "This is an evil drive. >> All those logs are still just floating. [laughter] >> Yeah. Everything's just sitting there." >> Yeah. >> Yeah, >> man. >> Evil evil cursed hardware. >> Yep. Uh we got a couple more super chats to get to and then it is almost 30 minutes past the hour and so I think it's uh probably time to call it. So we'll get through these and uh call it a fantastic show. Uh let's see. Uh was it 89 Supra sends over five Canadian bucks? I don't think we we did the super chat for that. Uh thank you. I did try that patch and it works. I'm worried about its lifespan, hence asking for an official, uh, fix. Uh still have the Mark III, by the way. Excellent. Um >> excellent on keeping your Mark III. Turbo or non?
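The lying-drive story is about the flush contract that runs through the whole storage stack. From the application side, that contract looks like this sketch: `fsync` pushes data out of the kernel page cache and asks the device to flush its own volatile cache. A drive that acknowledges the flush but keeps finishing it "asynchronously in the background," like the one in the story, silently voids this guarantee, and ZFS transaction groups depend on it being honest.

```python
import os
import tempfile

def durable_write(path, data: bytes):
    """Write data and ask the whole stack to make it durable before returning.
    fsync flushes user-space and kernel buffers and requests a device cache
    flush; a drive that acks the flush while data still sits in volatile
    cache breaks this guarantee without the OS ever knowing."""
    with open(path, "wb") as f:
        f.write(data)
        f.flush()             # user-space buffer -> kernel page cache
        os.fsync(f.fileno())  # page cache -> device, plus device cache flush

path = os.path.join(tempfile.gettempdir(), "flush-demo.bin")
durable_write(path, b"hello, stable storage")
print(os.path.getsize(path))  # 21
```

Jeff's pull-the-power test is the practical check: if data that was "flushed" vanishes after a power cut, the drive is lying below this API, and no amount of correct application code can save you.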
I hope hopefully turbo because it's an 89T Supra. So I'm hoping that T is for turbo. >> Yeah, I'm assuming it's turbo. Yeah. Yeah. Um would you keep a non-turbo one or would you just strap a big snail on? I've never had a Supra, but I did have a 94 Celica, which is the Gen 6, and God, that thing was good. Uh, it was the it was the GTS version. Uh, it wasn't the All-Trac. That's the uh that's the GT-Four. >> Uh, uh, so this was the front-wheel drive, uh, version of the rally car. Uh, but it was the the 2.2 L non-turbo. [snorts] Uh, and that thing was hell of fun. Um, I've also had I've also had a Toyota ZZW30, which is the MR2 Spyder. Yeah, that one's a >> chef's kiss mid-engine. Everyone hates on the on on on the the third gen MR2. I love that car. >> Um, it didn't come in a turbo, but it didn't need to. >> No, that one was just it was just fun car. >> Exactly. >> Um, so in that one Yeah. So, I mean, the the official fix is unfortunately there isn't going to be one, right? Um, >> yeah, >> that that is what it is. >> Um, I am >> Ian Brun. Okay. Sorry, I was reading I am. No, Ian. Okay. The other Ian $5. Uh, I got a bunch of DDR3 for cheap NAS and AI servers mid last year. I wish I got more because the price has even tripled on DDR3. Um, I haven't seen quite tripled, but the price has increased even on DDR3. Um, if you're looking at DDR4, I mentioned this in the last couple of videos. Um, where 32 gig sticks the last two years I've been able to get for like 25 bucks. Those are now $250 on the used market. Uh, 350 or more on the new market. [snorts] Um, DDR3 you could still get reliably a 32 gig stick for about 50 bucks. Um, >> but that's that's still more than it used to be. >> That's a lot more than it used to be. I You could get DDR3 32s for about 20 bucks. >> Yeah. >> Uh about six months ago. >> These are These are RDIMMs. These are RDIMMs we're talking about. >> Yeah. Exactly. >> Uh the UDIMMs were always expensive for DDR3.
ECC UDIMMs were always a little bit pricey, but even the RDIMMs have gotten a little wild now. >> Yeah. Yeah. So it we're over a dollar a gig for a technology that peaked in 2017. So yeah, that's a little concerning. Um so we'll see where this goes from here. Uh, Cosworth, $5. I snagged two 2 TB PNY XLR8 uh CS3040s or 3140s for $240 after tax from my local grocery store uh otherwise known as Kroger, otherwise known as Fred Meyer. Uh >> I remember the I remember you posting in Discord about that saying, you know, hey, it looks like they're there. And I think it was pretty unanimous. Everybody said >> you go go back there right now and buy whatever they've got. That's a pretty screaming deal for this. >> Yeah. Yeah. 120 for a 2 TB even before the increase. Like you could sometimes find them for like 90 bucks, but usually they were 100 plus. So you've that's a great deal. >> Um and then Cosworth wants to send over another $2 and says Ford Escort RS Cosworth or bust. >> Yeah. Yeah. My my my irresponsible speed comes on two wheels these days. So there you go. That's where I'm at. Yeah. >> What do you got? What are you riding? >> Uh right now I'm dailying a 2002 Honda VFR 800. So the V4, >> yeah, >> nice one. Sixth gen, you know, that, you know, angry, irresponsible crossover point at about 6,700 RPM for the VTEC where the extra cams come on. >> Yeah, that that little bit of wrap wrap. >> Yep. Exactly. >> Little bit of fun there. >> Yep. Uh I never got my motorcycle endorsement, but that's not because I'm not a motorcycle rider. I'm a dirt bike rider. And so I I spent so much time for my first 25 years of life riding every back trail and and everything else either either on ATVs or on dirt bikes. And uh I I was going to go out for my my motorcycle endorsement. Um >> it's never too late. >> It's never too late. >> I've got three kids and and here's my problem is I I still have my 250 dirt bike.
It's still in my shed and and it runs great and it fires up first stroke every single time and it's a it's a great little bike. My problem is the last time I was out riding um was about the time that I was thinking about like, hey, I should get a dirt a motorcycle and just commute to work on that because, you know, hey, 80 miles a gallon gas savings, whatever else. >> Um and the the biggest challenge with that between dirt and street is trees don't merge into you. >> No, they don't. Um, my problem was I was doing about 70 miles an hour on a loose-pack gravel road, fishtailing out the back end, having a grand old time, and the back of my brain going, I could definitely do about 85 on this. Like, not a problem. I've got I've got this handled no problem. And then the front of my brain caught up with the back of my brain and went, really? [laughter] That's your thought process right now? Like the logical side caught up with the adrenaline side and it went, you probably shouldn't own any bike that goes faster than this, let alone on the street and and I I got back and I parked the bike and I haven't fired it up since. It's I haven't been on that bike in like eight years. Um, simply because of that. >> I'll probably get on it again at some point. Um, but uh I've had a lot of cool cars. I've had a a lot of fun. I I've had snowmobiles. I've had uh I had a pair of 660 Yamaha snowmobiles at one point that are like the 0-to-60-in-two-and-a-half-seconds variety. You know, >> a two-stroke 660, they go. Uh you're no longer riding them so much as holding on to them. >> Um I've had my fun and I like coming home more than anything else.
So, that's probably going to be where I go, >> you know, I probably want to make sure I'm at something that keeps an even keel for the both of us. >> Yeah. Um, so I, like I said, I've had a bunch of really cool cars, too. I've had the I've had the ZZW30. I've had the the Celica. I had uh I had a 350Z manual roadster for a long while. Um they've all been great. Um I drive a Chevy Bolt EV now and it's also great. >> Welcome to the Talking Dads podcast. >> Yeah. [laughter] >> People know what they signed up for. Okay. >> Exactly. >> Exactly. >> Yeah. Well, anyway, a couple more quick couple more quick ones real quick before we Yeah, go for it. Uh Ke Kevin Neighbor says, "Do SAS drives run well on TrueNAS? Couple of Z800s that I'm considering using. Absolutely. I mean, that's what we build that's what we build our enterprise gear on is all running SAS. So, yeah, SAS works great. Um, don't do multipath. >> So, yeah, if you're using an external cage and it's like, oh, you got you can run here to controller A, controller B. Uh, just run controller A. >> Don't run uh don't run multipath. >> Uh, not supported. >> Again, ZFS direct drive access, no expanders. And, uh, but on the on the Z800 >> SAS expander is okay. SATA port multipliers bad. >> Yes. Um, >> and no RAID cards. >> I I do know the HP Z800 on that. It's It's like a four disk uh internal anyway. And so you're probably fine. >> Yeah. If if you're doing the if you're doing the internal like the Intel's whatever C-series chipset SAS drivers, uh those are fine. >> Yeah. >> Yeah. So you won't you won't have an ability to throw multipath into that. >> Yeah. It's mostly if you're doing the, you know, external ones where you buy one of those, uh, shelves, you get a, you know, a used one from somewhere enterprise and they they're set up for redundancy. Um, that's, you know, do that with Enterprise, but for Community Edition, there's no multipath support, so don't set it up.
You'll get weird errors about duplicate serial numbers and stuff like that. >> Yep. >> Uh, what else we got before we close? [sighs] >> Uh, SAS expander going to SATA, because SATA drives work on SAS. Uh yes, but be mindful of your overall cable length when you transition down to the SATA protocol. So if you're doing something like a SAS, you know, an HBA and you're breaking out to individual SATA cables, uh be mindful of that max cable length for SATA, which I believe is 1 meter. Yeah. >> Um and that counts. So if you're trying to like >> do do one of those spider cables out to an external box of SATA drives, be mindful of the cable length. But if you're doing it in, you know, a typical rack mount chassis where it goes straight into a hard backplane and it's SAS cabling all the way through, uh, you get a lot more tolerance. SAS uses the higher signal voltage. >> Yep. >> Yeah. Yes. Being a dad is a good thing. Ian Brun. [laughter] >> I would like to continue being a dad. So, >> the walking dads. >> Yeah. [laughter] >> Yeah. HBA go into a backplane that has an expander. Yeah. All right, that's perfect. Um, just don't dual path it. >> You're good to go. >> Uh, if you have wide-port support on your expander, uh, look that one up. That basically lets you get an x8 link from your HBA to your expander. Uh, yes, that's good. Uh, if it's two separate x4s, don't do that one. You know, that's a complicated thing. The the Walking Dads, says Dr. Cutress. Walking Dads. >> Uh, Triik says he's looking at the new RAV4 plug-in hybrid. Yeah, that's fantastic. Um, >> RAV4 Pride. >> I I love Toyota's whole Hybrid Synergy Drive. Um, we've we've got a 2022 Toyota Sienna. It's not the plug-in hybrid, but it is the the battery charge hybrid. Uh, so it's got, I think, a 7 kWh battery. Um, it it's it's good for like 7 miles total or sorry, 20 miles total of of electric driving. But what it'll do is it'll kick on the engine when it needs more torque, more more power, whatever else.
Um, in our full size Toyota Sienna minivan with all seven seats filled, plus a dog plus a soft bag luggage rack on the top with like 12 suitcases and and and everything else in it. We got 33 and a half miles to the gallon going over the Cascades. >> Wow. So impressive. So up to 6,000 ft and back down to sea level. Like >> Yeah, I was going to say that's that's not flat driving. >> No, you're going >> Yeah, we got 33 and a half with that car literally fully loaded as far as it would possibly go. >> The only thing we could have done worse is adding a trailer to the back. >> Um, and it got 33 and a half. >> So that'll kill it because it's there it's there's not quite as much torque on those ones anymore. Yeah. With the hybrid. You got the instant torque of the electric, but it can't sustain it. >> Exactly. Yeah. It's It's got really good torque up to about 15 miles an hour, but after that it's meant for for highway chugging. Um, which is why it's so good. >> My last Sienna was a 2004 with their old 3.3 L V6. >> Oh, yeah. >> Not the most not the most economical, >> but damned if that thing would start in any condition. It had been sitting in the middle of COVID for 6 weeks in the dead of winter. So, we're talking like uh, you know, minus 30 to minus 40. >> Yeah. Um, three turns it started. Three cranks it started up immediately. >> There there's a reason the Lotus Exige uses that that block as as their motor. >> The ECU was so cold that it did not power on my instrument cluster and it would not let me shift out of park for about 30 seconds until the computer rebooted and started. [laughter] But the engine came on immediately. >> Yeah. >> And it said, >> "Okay, I'm running." >> Yeah. Let me wait till the rest of the crap around me has time to catch up. Okay, I'm running. But I'm running. >> Hold on. >> Please wait. >> Hold on. [laughter] >> One more minute. >> All right, you're good. And darned if that thing didn't do damn well everything I asked of it and more.
I put I took the seats out, put a tarp in the back, and loaded loose rock in that thing for landscaping. Yep. >> And it did not complain. >> Yep. >> Fantastic vehicle. Yeah. Kind of miss it. >> We love ours. It's been fantastic. Uh, we've done 52,000 miles on it. We've we've done Southern California and back a couple times. >> Exactly. Um, >> barely broken in. I hit I hit 280,000 on mine before I sent it off. >> Yeah. No, it it runs like a freaking top and we'll probably do 250 or 300 on it before we're done with it. So, uh, does Toyota Hybrid charge the battery or is it an engine drivetrain? Um, the Toyota Hybrid, uh, the way ours works is the engine will charge the battery while it's running. Uh and it also has regenerative braking. Uh so it does both. Uh in ours it is essentially a three-motor setup. There is the ICE engine itself, the internal combustion engine which is a 2.4 L uh four-banger. Um there is a front electric motor and a rear electric motor which both drive equally the front and the rear wheels. Um and the electric motors are all obviously electrically driven. Sometimes uh specifically at low speeds sub-10 miles an hour 100% of your drivetrain is coming from the electric motors with the engine as a battery charger. After 10 miles an hour the engine tends to kick in and and can take over propulsion of the front wheels. Um so your rear wheels are 100% electric. Your front can be either electric or or gas or hybrid. Um and uh while the engine is running it is obviously charging the batteries. It runs great. Um, it's a little bit weird to get used to because it's the it's the CVT from hell in which your engine always runs at 2500 RPM whether you like it or not. And so you push on the gas and your car goes and just stays there. >> Um, and you're accelerating or decelerating or not. It it doesn't matter. The sound is the same. [laughter] >> It's just saying this is the optimal fuel consumption speed to run this engine at right now.
You're going to run it >> and we're going to do we're going to do the hard work for you of translating that to forward momentum. >> Yep. >> Yeah. >> But the end result is 35 miles to the gallon. >> And that's a good thing. You can't argue with that result. >> Exactly. Anyway, uh you're a couple hours ahead of me, so I'm probably going to let you go. >> It's probably a good idea. It's about 20 minutes to midnight here and I have another I have another podcast to record tomorrow morning. >> Well, there you go. Yeah. Yeah. You're That's right. You're East Coast. Okay. >> Exactly. I mean, I'm on the East Coast, so it's a little bit late here. >> It's It's 8:40. I could go for another two hours still. So, and in fact, I will over on the afterparty in the Discord, uh, which you can get access to via patreon.com/craftcomputing. Uh, drop me a subscribe. Uh, every dollar helps keep the lights on around here. But the advantage is you get access to the awesome Discord community that hangs out there throughout the week. Like I said, there's about 1100 active members. Uh, even HoneyBadger's over there. So, if you want to bug him even more, uh just hit him up, uh, @ him and uh and we'll get some conversations going, whether they be BSD related or not. >> Exactly. And uh again, thanks for having me on the show, Jeff. I really appreciate it. Absolutely. Again, I'm doing my own podcast. Uh you know, we record ours. Ours is not a live one. Uh but I think you got it down in the description there, Jeff, for the TrueNAS Tech Talk T3 podcast. Um come, you know, give us a a like, a subscribe, watch our videos. Uh we love to do all that technical stuff there as well. And we'll also tell you about, you know, what's what's coming in the future of TrueNAS and where we're going with this. We do have some exciting changes coming up in 2026. So, make sure you tune in.
Um, you know, hey, maybe I'll have to come back on here when we drop some of these changes and we'll talk about what's going to come along with it, Jeff. >> Absolutely. Yeah, we we didn't get to the is there is there not a 26 talk yet, but uh may maybe we'll schedule that for another another couple of weeks. >> Maybe maybe another time. Watch this, you know. Why don't you go check out the blog over at TrueNAS? Maybe. There you go. >> You know, we we I've been saying TrueNAS 26 a lot on this show. >> Yeah, >> there might be a reason for that. >> Yeah, he's going to check that out. >> He's going, "Oh, no. It's official as of 26." Oh, wait. Hold on. Hold. Okay. Okay. 26. Okay. [laughter] >> 26. I I made sure we pushed that blog out before I got on the show. So, >> Exactly. >> Exactly. All right. Well, again, thanks for having me. Thanks everyone for tuning in. >> Absolutely. Anytime, Chris. Uh you're more than welcome on again. Uh and we'll definitely have you again. Uh, in the meantime, like the video, subscribe to Craft Computing if you haven't done so already, subscribe to T3 Tech Talk down in the video description. Uh, link to to the TrueNAS podcast is down there. Uh, Patreon, craftcomputing.store, all the extras that you usually do. Beyond that, I hope you all have a fantastic week and we will see you back here next Wednesday at 6 PM Pacific time for the next Talking Heads.

Video description

Thanks to Meter for sponsoring today's episode. If you're interested in learning more about how Meter can help with your IT Infrastructure, go to https://meter.com/craftcomputing to book a demo today.

Welcome to Talking Heads, your once-weekly show about everything happening in the world of Homelab, Servers, craft beer and cocktails.

Check out this episode in Podcast form over at https://open.spotify.com/show/31ZxkU6RwPHG8A4jQjxSG3

Support us on Patreon and get access to our exclusive Discord server. Chat with all of the hosts from Talking Heads all week long. https://www.patreon.com/CraftComputing

Want to fuel Craft Computing? Parts, beer, gifts? I've got a mailbox!
Craft Computing
1567 Edgewater St NW, #51
Salem, OR 97304

Follow Jeff @CraftComputing on most platforms
Follow Chris on YouTube @T3-Podcast

On tonight's show...
- Retraction from Last Week - Bazzite / GPD drama wasn't as it seemed to be: https://universal-blue.discourse.group/t/upholding-our-values-our-final-update-on-gpd/11594
- TrueNAS - Chris Peredun of TrueNAS, aka "HoneyBadger" joins the Talking Heads podcast tonight, busting some ZFS myths, and sharing insights on where TrueNAS and OpenZFS will be heading over the next year. We'll be getting technical, so get ready for the alphabet soup of ARC, SLOG, PLP, and all kinds of WTF as we look at some "unique" hardware choices that have been made by community members. If "regulations are written in blood", then hardware recommendations are etched in lost bits.
- Tech News - Firefox adding an AI killswitch: https://www.theverge.com/news/872489/mozilla-firefox-ai-features-off-button
- Buckle up, It's Gonna Get Worse (DRAM/NAND): https://www.trendforce.com/presscenter/news/20260202-12911.html
