bouncer

ServeTheHome · 197.4K views · 785 likes

Analysis Summary

40% Low Influence
mild · moderate · severe

“Be aware that the 'buy now' urgency is partially driven by a sponsored narrative that favors specific hardware cycles, which may not apply to every organization's specific budget or refresh timeline.”

Transparency Mostly Transparent
Primary technique

Performed authenticity

The deliberate construction of "realness" — confessional tone, casual filming, strategic vulnerability — designed to lower your guard. When someone appears unpolished and honest, you evaluate their claims less critically. The spontaneity is rehearsed.

Goffman's dramaturgy (1959); Audrezet et al. (2020) on performed authenticity

Human Detected
98%

Signals

The video features a well-known industry personality (Patrick Kennedy) using natural, unscripted speech patterns and referencing physical objects in his environment. The content is highly specific to enterprise IT procurement and lacks the formulaic structure or synthetic cadence of AI-generated narration.

Natural Speech Patterns The transcript contains natural filler words ('well', 'you know', 'like'), self-corrections, and conversational phrasing ('beat up your sales rep', 'up in arms').
Personal Branding and Context The speaker identifies as 'Patrick from STH' and references specific physical items on set ('AMD processors on the set that AMD sent').
Industry Expertise The content discusses nuanced procurement strategies (quarterly budget cycles, quote duration changes) that reflect deep domain knowledge rather than generic AI summaries.

Worth Noting

Positive elements

  • This video provides highly specific technical data on memory channel density and PCIe lane distribution that is genuinely useful for data center architects.

Be Aware

Cautionary elements

  • The use of 'revelation framing' regarding market pricing creates an artificial sense of urgency to purchase specific hardware immediately.

Influence Dimensions

About this analysis

Knowing about these techniques makes them visible, not powerless. The ones that work best on you are the ones that match beliefs you already hold.

This analysis is a tool for your own thinking — what you do with it is up to you.

Analyzed March 23, 2026 at 20:38 UTC · Model google/gemini-3-flash-preview-20251217 · Prompt Pack bouncer_influence_analyzer 2026-03-08a · App Version 0.1.0
Transcript

Buying servers in 2026 is tough. DRAM pricing has gone up, so people are spending seven or eight times as much as they did this time last year. Plus, NAND pricing is going up, so even SSDs cost more. And the worst part is probably that there's really no end in sight, at least for 2026. But there are still some things that you can do to actually get a decent value in servers this year. So, well, that's the topic of today's video. Hey guys, this is Patrick from STH. This is a little bit of a throwback for us, but we're going to do a buyer's guide on things that you can do to optimize your server purchases in 2026. We're going to go through some strategies that you can use to optimize every dollar that you spend in 2026 because the fact of the matter is if you need servers this year and maybe even next year, you're going to have to be smarter about how you spend your money. And that's exactly what this guide is going to do. Now, really quickly, you're going to see that we have some AMD processors on the set that AMD sent. I have to say that AMD is sponsoring this video. But of course, this is going to be focused more on how you can go and save money on servers in general. And today, we're going to talk about five main strategies. The first one is going to be to buy earlier rather than later in the year. Two is going to be rethinking your node strategy because a lot of folks are using antiquated ways of thinking about servers. Three, of course, is going to be to do some math on licensing because that is a huge area where you can save a lot of money these days. The next one is being wise about how you purchase DRAM and deploy it in servers. And of course, the same goes for how you deploy NVMe storage as well as other devices in servers and the way you think about those relationships. And the fifth one you would think is obvious, but we hear about it all the time, and it's to simply stop wasting money when you purchase servers.
With that, let's talk a little bit about the first step, and that's really the timing. In 2026, you want to buy your servers early. But for many people, it's a little counterintuitive because, you know, folks have quarterly budgets or annual budgets at work. And a lot of times, the idea is you wait until that last minute so you can go beat up your sales rep and get a couple extra percent off in terms of discounts. And by the way, that's something that folks have been doing in the industry for like forever. But at the same time, that strategy probably won't work in 2026. Now, there are still some cases where going out and buying at the very end of the quarter makes sense. If you're going to be light on the amount of NAND and light on the amount of DRAM that you need in a server, then of course you can probably go and push towards the end of a quarter, end of a month, end of a, you know, year to go and get a bigger discount. But if you are going to have a lot of memory or you're going to have a lot of storage or both, well then that's probably not a wise strategy. If you're DRAM or storage heavy, just go get your order in as fast as you can. And I know that sounds crazy and people are going to be up in arms, but there's a couple reasons for that. One, DRAM pricing: I have not talked to anybody in the industry that has told me, oh, by the end of 2026, DRAM pricing is going to get better. So, one strategy you might normally use is just to try to lock in a discount, make the quote valid for some time, and then order against that quote later. But a lot of the resellers and OEMs have gotten wise to that. So, you'll see that the quote durations have shrunk. And one of the big reasons they used to have longer quote durations is that when prices of components go down over time, a long quote duration means that if somebody gets a quote, they can say, "Okay, I finally want to go purchase at this price."
If they waited, like, you know, 30 days or 90 days or something like that, the price of those components in previous years would have generally gone down. But nowadays, with those prices going up, you're going to see those quote durations get a lot shorter. And a lot of times nowadays when those quotes expire, they're getting repriced at a higher price rather than a lower price, which is very different for the industry. And folks are seeing, just in a matter of a couple months, they might see like a 30% increase in their cost of servers. And that's just a lot. And people are not accustomed to it. The other thing is that just from a STH perspective, we're literally going out and buying everything that we need for 2026 and early 2027, or at least all of the stuff that we forecast that we need, like, now. We're not waiting till the end of the year. We're pulling all of those purchases in. So, number one is go figure out what you need to purchase this year and figure out how early you can buy it, because you're going to end up getting more bang for your buck the earlier in the year you buy. The second big one is rethinking your node strategy. Thinking about single socket and dual socket, and not just buying dual socket because that's what you always bought. Really thinking about whether you should use single socket, because these days there are good reasons to use single socket versus dual socket. Now, the first one is so important, and I don't even know why it's still a thing, but we still get asked this at least a couple times a year: are two sockets there for redundancy? Or do you lose redundancy by getting a single socket versus a dual socket? And that is completely false. You don't have two sockets, two processors, in a server for redundancy. They're there to expand the capabilities and really scale up within that physical server chassis. And the benefits are pretty straightforward.
One, you can get more cores and more memory and more IO in a dual socket server versus a single socket server. There are some caveats we'll get to in a little bit. You also can increase your memory bandwidth by going two sockets, which is always nice. And on the cost side, there are benefits to getting a two socket server. Like for example, a big one is you get to share a NIC and also potentially a switch port if you don't have a high networking requirement. You know, you can have just one NIC for two processors instead of having two NICs for two processors. You also of course get to share things like the chassis, the power supplies, boot drives. So there are definitely reasons that you might want to scale up to two sockets, but it's definitely not the case today where you have to go two socket. In fact, one socket in a lot of cases, even for like big hyperscalers, is a lot more attractive because it offers some pretty significant benefits. First off, let's talk about core counts in a single socket. It used to be that you would only have half as many cores. That was a big deal because maybe you didn't have that many in a server. But now you can get up to 192 cores, 384 threads in a single socket. And there are a lot of folks that think that that is too much for them. So if you don't need more than that, then you can actually get as many cores as you need in a single socket. So it's not the case today where, you know, folks are like, oh, I really need a couple extra cores and I can't get them. In most cases these days, guys, you can get a single socket server and be fine on cores. And especially if you're at lower core count CPUs — like let's say you were going to do, you know, two sockets with 32 core CPUs — getting a single socket 64 core makes a lot of sense because, well, let's see, you don't have to buy a second processor. You don't have the extra power consumption from the second processor.
You don't have to go and pay for a more expensive motherboard to support that second processor with extra DIMM slots and what have you. And another interesting thing is that these days you actually get more PCIe lanes per CPU in a single socket server versus a dual socket server. So if you're just trying to optimize on getting the most IO you can per CPU, then actually single socket tends to be better. And let's talk about memory for a second, because one thing you might say is, oh, if I have two sockets I can connect more memory. Isn't that true? And it kind of is true, but let me give you an example of why it doesn't necessarily work like that. A great example is this processor in front of me. If you have a two socket server, realistically for most folks, you're only going to be able to have 24 DIMMs: 12 on one socket, 12 on the other socket. That's because that's how much room you have in a chassis. Motherboards are constrained by the width of a 19-inch server in most cases. And because of that, there's just physically not enough room to go and put 24 DIMMs per socket. So, you get stuck with 12 DIMMs. And by the way, this is the same for AMD EPYC as well as the Intel Xeon 6900P series. The one thing I will say though is that there are some AMD EPYC designs, really funky looking motherboard designs where the CPU sockets are offset and they do crazy things to be able to fit a full 48 DIMMs in a two socket server, but most of the time you're only going to get 12 DIMMs on each CPU. But if you don't have that second socket, then you have plenty of room to go and have 24 DIMMs with a single socket CPU. So you could actually have the same number of DIMMs with only one processor. And that means that you're not paying as much to go put extra processors in a system.
Another big one is the reduced complexity, because when you have a dual socket server, if you, say, only have one NIC, that means that the processor the NIC isn't attached to is going to have to send data from its memory all the way over to the other CPU socket and then out through the IO and the NIC. And then when data comes in, it has to go all the way back. And that's why you see oftentimes that modern AI servers will have a NIC dedicated to each CPU: so that way you don't have to go through that socket-to-socket link. Okay guys, and let me get to just one of my pet peeves, and just something you can do if you're coming from older systems where you always had physical servers. Like, you know, a lot of people were running Xeon Bronze 6-core, 8-core CPUs back in the day, and that was cool back then, but these days just go use KVM virtualization, virtualize those suckers, and save yourself a ton of money when you do a refresh this year. But that gets us to licensing, which is our third topic. And by the way, we went into great detail on this last year. We really showed you the different CPUs and how CPUs have changed over the years with respect to how you can use them to optimize your license costs. And that's really why we have this processor in front of me. This is the AMD EPYC 9575F, which is like the hotness of a 64 core processor because it has a higher frequency, allowing you to get the benefits of not just having fewer servers but also maintaining very high performance. And to recap that piece quickly: back in 2019 to 2021, I asked Supermicro about the most popular server CPU, and they said that the number one Xeon that they sold was the Intel Xeon Gold 6252, which was a 24 core part. So, not only with the new AMD EPYC CPUs do you get more features and connectivity, but even just on a per core basis, you get massively higher performance.
We're talking about over twice the performance per core than you got in those systems from just like 5 years ago. And we used this exact processor to compare back to those Xeon Gold 6252s. And what we found was that it would take, I don't know, around 136 or so cores, maybe a little more than that, but about 136 cores of the old Xeon to equal 64 cores in the new processor. So although we talk about the 128 core and 192 core processors all the time because they're big numbers, the fact is that if you're in the per-core-licensed server realm, then this type of processor, a frequency optimized but still high core count processor, is like a godsend. And just in case we end up in a supply constrained world for server CPUs this year, just think about it this way. 32 core and 64 core CPUs like this one are definitely a big deal for the virtualization market. Also, 64 core, 128 core, 192 core: those are the ones that we often see in AI servers. So, there's a lot of, you know, push on those. But if you have like a 48 core server processor, or maybe a, you know, 72 or 96 or just whatever it is that's not one of those numbers, there tends to be less competition for those parts because they're not the ones that align with VMware virtualization. They're also not the ones that are really sought after for AI servers. And so just as a thought, guys, that might be a good thing to keep in the back of your mind for later in 2026. And one other super important thing in 2026 is that we are going to get new processors. For example, AMD has, you know, the AMD EPYC Venice line, and we'll get into that when we can. But on the Intel side, the Intel Xeon Diamond Rapids processors went through a big change that we confirmed last year. We're actually the ones that broke the news that the 8 channel Intel Xeon Diamond Rapids is not coming to the market.
Instead, Intel's focusing on their 16 channel one, which means that if you are going to be, you know, buying an 8 channel or a lower-end SKU, you're probably going to buy the older, or the current generation, Granite Rapids. From a licensing perspective, Intel was going down a path where they only had one thread per core, which means that if you are licensed on a per core basis, you obviously would want to have two vCPUs per core because, well, you know, then you get more vCPUs in your server. Okay, so now let's talk about buying memory and storage in 2026. And the first thing I always like to say is just that you need to get the best bang for your buck when you start talking about memory, because it's getting expensive this year, guys. And that means either getting the most performance or optimizing on lower costs. And if you don't know this: ideally in servers, memory is installed at one DIMM per channel because that's how you get the highest frequency and highest performance from memory. Another thing is if you're buying new servers in 2026 and you're replacing like five-year-old ones, you're probably replacing Intel Xeons because, let's face it, five years ago that's what most people were buying. And the other thing to remember is back then Intel was still doing this thing where different levels of Xeon had different memory speeds that were used in the different tiers of their processors. Now, if you do just want the maximum memory bandwidth that you can per CPU, frankly, Intel Xeons have these MCR DIMM things that are like the early versions of the MRDIMMs that we're all going to get later in more platforms. And those, you know, they cost a ton, but you can get more memory bandwidth from them.
I will just point out, though, that if you are wanting to maximize your memory bandwidth, that's really a, you know, first-half-of-this-year solution, because once we get to the new generations of CPUs there's going to be way faster memory and you're also going to get 16 channel memory. I'll just caution you that if you do go down that path, know that a huge upgrade is coming for you in a little bit. Another fun one that a lot of folks don't know is that when you buy servers, a lot of times they'll be populated with two DIMMs per channel. When you do that, they're actually running their memory at lower clock speeds than the modules are often rated at. And lower clock memory actually costs less. So, going back to that bang for the buck in single socket versus dual socket, let me just kind of give you the lay of the land right now. If you're going from like one or two DIMMs up to maybe eight DIMMs per socket, then realistically, going, you know, AMD EPYC or Intel Xeon Granite Rapids SP, you're not really going to notice a difference. Even the AP parts, you're not really going to notice a huge difference on those. But when you go from DIMM number 9 through 12 — let's say you're just trying to get more memory capacity, you need more DIMMs, or maybe you just want more memory bandwidth — well, then the single socket EPYC can continue to keep scaling even with that single socket, where you don't really get that if, you know, you're going to be in a Granite Rapids SP single socket. And that's important because if you do put 12 DIMMs, or, you know, say 9 to 12 DIMMs, in a Xeon 6700 series platform, you're actually going to downclock the memory because you've gone to 2DPC mode. So, let's just keep with the Xeon 6700P versus AMD EPYC single socket here. And if you are going from DIMMs number 13 through 16, well, then it gets interesting, because on single socket Xeon you would of course downclock your memory and you're still in 2DPC mode.
With the AMD EPYC, you need to have a 24 DIMM single socket platform, but then you would also downclock because you're going over 12 DIMMs, so you're going to two DIMMs per channel. And of course, if you're on a dual socket Xeon 6700P series, well, now you've had to go pay for that second processor, but you can continue scaling up to 16 DIMMs. Then when you go through DIMMs 17 to 24 in a system, well, then you can still do that on single socket EPYC as long as you have the right server with enough memory slots. But on the Granite Rapids systems, you're going to be downclocking into 2DPC mode again. Today, if you want to have 25 to 32 DIMMs in a server, you're probably in a dual socket 8 channel platform, so you're running in two DIMMs per channel mode. Of course, later this year when we have 16 channel server CPUs, then that'll just be 1DPC. And then if you want to go over 32 DIMMs in a server, you could either go to like a quad socket, I guess, Intel Xeon platform, or, from like, you know, 33 DIMMs all the way up to 48 DIMMs, you could do a dual socket AMD EPYC platform, like one of the funky ones that is a little offset and all that kind of stuff just to give you more DIMM slots. But technically there are platforms out there that have 48 DIMMs, which is insane. And the whole reason for doing this is just to give you the information, so if you are buying a lot of memory this year you can make sure that you're making the right tradeoff between capacity, cost, and performance. And on the SSDs, this is another really interesting one, right? Because nowadays you can buy huge SSDs. Like, 5 years ago SSDs just were nowhere near this; we didn't have 122 TB or 256 TB SSDs, that wasn't a thing a couple years ago, but now you can definitely go get them.
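[Editor's note] The DIMM-count walkthrough above can be sketched as a quick helper. This is a minimal sketch assuming only the channel counts quoted in the video (12-channel single-socket EPYC / Xeon 6900P-class, 8-channel Xeon 6700P-class); actual downclocked memory speeds vary by platform and module, so only the 1DPC/2DPC mode is modeled, and the function name is hypothetical.

```python
# Minimal sketch of the DIMM-count tradeoff discussed in the transcript.
# Channel counts are the ones quoted in the video; real clock penalties
# in 2DPC mode depend on the specific platform and modules.
def dimms_per_channel_mode(dimms: int, channels: int, sockets: int = 1) -> str:
    """Return '1DPC' (full speed) or '2DPC' (downclocked) for a config."""
    total_channels = channels * sockets
    if dimms > 2 * total_channels:
        raise ValueError("config does not physically fit")
    return "1DPC" if dimms <= total_channels else "2DPC"

# 12 DIMMs on a 12-channel single socket: still one DIMM per channel.
print(dimms_per_channel_mode(12, channels=12))            # 1DPC
# 12 DIMMs on an 8-channel single socket: channels start doubling up.
print(dimms_per_channel_mode(12, channels=8))             # 2DPC
# 16 DIMMs on a dual-socket 8-channel platform: back to 1DPC.
print(dimms_per_channel_mode(16, channels=8, sockets=2))  # 1DPC
```

This mirrors the video's point: a 24-DIMM single-socket 12-channel EPYC board and DIMMs 13 through 24 both push you into 2DPC, while adding a second 8-channel socket buys you more 1DPC headroom at the cost of a second CPU.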
But the other thing is that, you know, if you're refreshing, especially on a 5-year cycle, you can do a lot more with today's servers versus what you were buying 5 years ago. Like for example, if you have a first or second gen Intel Xeon Scalable — because those things went everywhere — then you only have 48 PCIe Gen 3 lanes per CPU, or 96 PCIe Gen 3 lanes per server. And guys, here's why that matters. Take a modern processor: you have, say, 128 PCIe Gen 5 lanes, right? Just to get the same number of lanes, you probably needed three of those LGA 3647 processors, because, well, you just didn't have enough lanes per CPU. But those lanes are also running at about four times the speed, because we have PCIe Gen 5 versus Gen 3, which means that you would need over 10 sockets, or over five servers' worth of, you know, those older Gen 3 servers, to be able to have the same amount of bandwidth that you get out of a single socket today. And the actual consolidation ratio would be a lot better because you'd have NICs and all kinds of stuff in those old systems as well. But still, guys, I mean, just think about that. You can now do things with storage and other PCIe devices and these new processors that you just couldn't do before. You can have larger capacity NAND drives. Plus, you can have more IOPS and more throughput in a server than you used to be able to. So guys, I think my number one thing is, if you're going to buy NAND, one thing to think about is you can actually buy new servers and optimize the heck out of the CPU side and maybe the memory side by just having way more storage on the front and, you know, only doing like single socket or something on the back end. And that gets me to my number one piece of advice for 2026, and that's just stop wasting money. Like it or not, 2026 is going to be a challenging time for servers,
which is also going to be exciting, because we're going to have a new generation of server CPUs with more IO, more cores, more memory bandwidth, all kinds of new technology. But on the other hand, a lot of folks don't necessarily need to go there first. That's really going to be, I think, the realm of most AI and HPC type applications first this year, and then everybody else is going to follow later on. And what that means is that if you don't need those crazy levels of performance per server, then buying today's PCIe Gen 5 servers and locking in the lower pricing on memory and storage, because we're early in the year, is always a good idea. And this is especially the case if you're not buying those 128 or 192 core CPUs today and you're buying down in like the 32, 48, 64, 96 range, somewhere in that kind of lower-end range. And the other big thing I keep telling folks is that, look, the NAND and DRAM pricing, it just is what it is. So there is an opportunity to do something different this year. You're going to need to challenge yourself and really look at what you're trying to deploy. Deploying the same old platforms this year and just saying, "Hey, I'm going to get another dual socket, you know, whatever platform that my sales guy always pitches with 'Oh, here's a new model. Go buy this.'" — that's the wrong model for this year, guys. This year, what you need to be doing is taking a long, hard look at all of the costs of a system. Learn about this stuff.
We have this video so that you can learn about some of the cost drivers in a server. No matter what the pricing is, that gives you a framework to start evaluating servers, which I think is super important. Also, if you're running servers with like 20% memory utilization, 20% core utilization, all that kind of stuff, don't overspend, right? That's just the wrong way to do it this year. And if you're deploying these low-end physical boxes because you don't want to go and pay for VMware license costs, just use KVM virtualization. It's 2026, guys, just do it. The other thing that we will see later this year is that you're going to need to build your business cases for your servers around agentic AI. That's just life. And I know a lot of folks, you know, get a little queasy about the AI spend these days, but at the end of the day, a lot of the AI agents are going to use CPU resources to coordinate, but also to go and do research, like going and, you know, pulling a web page or whatever, right? That's something that is going to take CPU power. So, you're probably going to end up actually needing more CPU and more compute than you would think, because there's a new application that is really driving that. And so if I have a key lesson learned here and what you need to know for 2026: you need to learn about your servers, because at the end of the day, we're going to be using more compute. There's an application out there that is just exploding in the amount of compute that it's using. So deploying more compute is something that everybody's going to have to do this year. But at the same time, it's important to not blindly go and deploy the exact same thing that you've been deploying for years, because we see all the time that folks are going out and getting new servers that, you know, don't match what they're trying to do. And it just drives me nuts.
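[Editor's note] The per-core licensing math from earlier in the transcript (about 136 old Xeon Gold 6252-class cores to equal 64 cores of the newer frequency-optimized part) works out roughly as below. The ~2.1× per-core figure is back-calculated from the core counts quoted in the video, not an independent benchmark.

```python
# Rough consolidation math from the licensing discussion in the transcript.
old_cores_equivalent = 136   # Xeon Gold 6252-class cores (quoted estimate)
new_cores = 64               # frequency-optimized 64-core part

# Implied per-core performance uplift of the newer part.
per_core_uplift = old_cores_equivalent / new_cores
print(f"Per-core uplift: ~{per_core_uplift:.1f}x")  # ~2.1x

# If your software is licensed per core, the license count needed for
# the same amount of work shrinks by the same factor.
old_licenses = old_cores_equivalent
new_licenses = new_cores
print(f"Per-core licenses: {old_licenses} -> {new_licenses} "
      f"({(1 - new_licenses / old_licenses) * 100:.0f}% fewer)")  # 53% fewer
```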
And if you're an STH YouTube watcher and you're, you know, thinking about what you're planning for at work, please don't be in the CPU-poor crowd by the end of 2026. So guys, hopefully this gives you some useful frameworks that you can use to have those discussions with your sales reps, with your teams, and with the rest of the folks in your organization. If you did like this video and you think some folks in your organization could also use it, well, definitely go send it to them. Send it to your sales reps. I don't care who you send it to, but just make use of the information that we have here. If you did like this video though, why don't you give it a like, click subscribe, and turn on those notifications so you can see whenever we come out with great new videos. As always, thanks for watching.
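[Editor's note] As a footnote to the PCIe comparison in the transcript: using the usual approximate effective per-lane rates (Gen 3 ≈ 1 GB/s, Gen 5 ≈ 4 GB/s — assumed round numbers, not exact signaling rates), the "over 10 sockets, over five servers" figure checks out.

```python
# Back-of-the-envelope PCIe bandwidth math from the transcript.
GEN3_GBPS_PER_LANE = 1.0   # ~1 GB/s effective per PCIe Gen 3 lane
GEN5_GBPS_PER_LANE = 4.0   # ~4 GB/s effective per PCIe Gen 5 lane

old_lanes_per_cpu = 48     # 1st/2nd gen Xeon Scalable (LGA 3647)
new_lanes_per_cpu = 128    # modern single-socket server CPU

old_bw = old_lanes_per_cpu * GEN3_GBPS_PER_LANE   # 48 GB/s per socket
new_bw = new_lanes_per_cpu * GEN5_GBPS_PER_LANE   # 512 GB/s per socket

sockets_needed = new_bw / old_bw
print(f"Old sockets to match one new socket: ~{sockets_needed:.1f}")  # ~10.7
print(f"Dual-socket old servers needed: ~{sockets_needed / 2:.1f}")   # ~5.3
```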

Video description

We go through some of the key principles for buying servers in 2026 given the pricing demands on memory, SSDs, and other components. Given both the pressures on DRAM and NAND pricing, you cannot afford to buy servers like you used to in the past. STH Main Site Article: https://www.servethehome.com/buying-servers-in-2026-buyers-guide-tips-amd-dell-hpe-asrock-rack-asus-supermicro/ Substack: https://axautikgroupllc.substack.com/ STH Top 5 Weekly Newsletter: https://eepurl.com/dryM09 ---------------------------------------------------------------------- Become a STH YT Member and Support Us ---------------------------------------------------------------------- Join STH YouTube membership to support the channel: https://www.youtube.com/channel/UCv6J_jJa8GJqFwQNgNrMuww/join Professional Users Substack: https://axautikgroupllc.substack.com/ ---------------------------------------------------------------------- Where to Find The Unit We Purchased Note we may earn a small commission if you use these links to purchase a product through them. 
---------------------------------------------------------------------- STH Merch on Spring: https://the-sth-merch-shop.myteespring.co/ ---------------------------------------------------------------------- Where to Find STH ---------------------------------------------------------------------- STH Forums: https://forums.servethehome.com Follow on Twitter: https://twitter.com/ServeTheHome Follow on LinkedIn: https://www.linkedin.com/company/servethehome-com/ Follow on Facebook: https://www.facebook.com/ServeTheHome/ Follow on Instagram: https://www.instagram.com/servethehome/ ---------------------------------------------------------------------- Other STH Content Mentioned in this Video ---------------------------------------------------------------------- - CPUs and Databases: https://youtu.be/M-krHlX38cs - We had to redo our servers: https://youtu.be/luD4T-IPbxY - Confidential Computing: https://youtu.be/6Avj9VHaZgk - Building our 2025-2026 Studio NAS: https://youtu.be/dfx_nJ9uyBw - Touring the "Center of the Internet": https://youtu.be/dmUmx3c8eJs - Inside Dell's AI Factory: https://youtu.be/8BSA67BqWjw - Touring xAI Colossus: https://youtu.be/Jf8EPSBZU7Y ---------------------------------------------------------------------- Timestamps ---------------------------------------------------------------------- 00:00 Introduction 01:43 Buying Earlier Rather than Later 04:16 Re-think Your Node Strategy 09:05 Do the Math on Licensing 12:16 Buying Memory and Storage 18:07 Stop Wasting Money! 20:28 Key Lessons Learned

© 2026 GrayBeam Technology Privacy v0.1.0 · ac93850 · 2026-04-03 22:43 UTC