Analysis Summary
Worth Noting
Positive elements
- This video provides specific, rare Linux-based (Fedora) benchmarks for the Intel Arc Pro B60, offering real-world power draw and transcoding data that is useful for niche home lab enthusiasts.
Be Aware
Cautionary elements
- The gap between the hyperbolic title ('The Future') and the creator's actual conclusion (that his current i5 CPU is 'perfectly fine') highlights how search-optimized framing can overstate the necessity of new hardware.
Influence Dimensions
About this analysis
Knowing about these techniques makes them visible, not powerless. The ones that work best on you are the ones that match beliefs you already hold.
This analysis is a tool for your own thinking — what you do with it is up to you.
Transcript
This hunk of drama right here is the Intel Arc Pro B60. I've always wanted to try out one of these Intel GPUs, and I was sitting there in Micro Center looking at a much more reasonable option, but this was sitting right beside it, and I decided, without doing any prior research into whether it was actually a good idea, to go ahead and try it out and figure it out for myself. And right here is an Nvidia RTX 3090, and this thing is awesome. The cool thing is these two GPUs are basically the exact same price. I got both of them for $650. That's $650 each, not together. That would have been awesome. I had 26 different streams going on this thing at the same time before frames started dropping below real time, while the 3090 only managed five. So TL;DR, if you're somebody who has a bunch of people on your Jellyfin or Plex server, this thing is awesome. And when it comes to gaming, this Intel actually kind of held its own compared to this 3090. We're going to be talking about it right after I thank the sponsor, which is also where I actually got this Intel GPU: Micro Center. Unfortunately for me, I'm in the Portland, Oregon metro area, and we don't have a Micro Center, so I actually flew down from Portland to the San Jose area to go to that Micro Center. And it was awesome. I've always wanted to go, and it did not disappoint. They had entire sections dedicated to networking hardware, which is a home labber's dream, walls of computer cases that I actually got to look at in person, and even a whole section with Raspberry Pi stuff. I picked up a Raspberry Pi mug. I had a blast. I do wish they had one in my area, but they did sponsor this video, so I've got a little thing to tell you. While both of these are actually decent options for AI, at Micro Center they've got the Nvidia DGX Spark and the RTX Pro 6000. Some absolutely serious hardware.
Or if you're like me and you want some much more powerful inference hardware, they carry these AMD Strix Halo powered machines with up to 128 gigs of shared memory. Super powerful and efficient for a home lab AI setup. They're pretty incredible. And if you want to dip your toes in and experiment, nearly any of their gaming GPUs can handle AI tasks with no problem. And big news: if you're in Austin, Texas (I wish I was saying this about Portland), you are getting a Micro Center. They're opening their 31st location later this year, and if you use my link down below, you can get a free 128 gig flash drive when the store opens. Do note that's when the store opens; don't try to show up during construction. They're not ready yet. And if you want to keep up with the latest in AI and tech, do check out Micro Center News. I've got all these links down below. So now, the main thing I was thinking when I bought this GPU was to use it in my home lab. I've been using the 3090 for Ollama and some n8n automations, but I wanted to see if the Intel could contend. To get a feel for how they perform, I ran a bunch of different benchmarks, all on Fedora. One thing I noticed about this Intel versus the 3090 is that it's new, so a lot of things aren't as well supported as I would like them to be. We're going to start off with the server benchmarks, primarily focused on media and encoding, and that is where this ASRock Intel Arc really did shine. In HEVC 4K encoding, the 3090 hit 68 frames per second while the B60 was at 150, more than double, and that is with just a single stream. Going from 4K to 1080p, which gives you a good idea of Plex transcoding performance, the B60 was at 202 frames per second while the 3090 was at 155. One of the beautiful things about this newer card is it has AV1 encoding.
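Frames-per-second numbers like these typically come from reading the progress counter ffmpeg prints on stderr during an encode. As a hedged sketch (the parsing function and sample line here are illustrative, not the creator's actual benchmark script):

```python
import re

def parse_ffmpeg_fps(progress_line: str):
    """Pull the fps value out of an ffmpeg progress line such as
    'frame= 3600 fps=150 q=28.0 ... speed=2.5x'. Returns None if absent."""
    m = re.search(r"fps=\s*([\d.]+)", progress_line)
    return float(m.group(1)) if m else None

line = "frame= 3600 fps=150 q=28.0 size=81920KiB time=00:02:24.00 speed=2.5x"
print(parse_ffmpeg_fps(line))  # 150.0
```

Scraping the final progress line after a fixed-length encode gives the sustained-throughput figure being compared here (150 fps for the B60 versus 68 for the 3090 on HEVC 4K).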
So any AV1 comparison is kind of meaningless, because the 3090 does not support AV1 encoding. For AV1, this B60 got 138 frames per second on the 4K encode and 202 on the 4K-to-1080p transcode. So if AV1 is something that's important to you, the choice is obvious here. Granted, we're seeing good numbers out of some of the cheaper Intel GPUs for this, so you probably don't need to get the Pro B60, but it is nice to have. HDR tone mapping: 47 frames per second on the Intel versus 30 frames per second on the 3090. And then the stream test, which we talked about in the intro of this video: how many streams can I do simultaneously? Again, a great benchmark for Plex and Jellyfin. This Intel pumped out 26 streams before it started to fall under realtime playback speeds. An absolute beast. The 3090, meanwhile, has a lock of five streams, which is not great, and even to get it up to five streams I did have to patch it to get it above three. So if you're going to get something beefy for media streaming in your home lab, Intel is a great bet. It's Quick Sync, and Quick Sync is just known to perform very well on these kinds of tasks. So if you're looking for a dedicated GPU for your media server, Intel is going to be the way to go. Or even just a capable processor: I've got a 13th gen i5 that I mostly use right now for my media transcoding, and I've never noticed an issue even with a couple of transcoding sessions running at the same time. This card is super overkill. If you have 20 users and a bunch of sessions going at the same time, this thing will be your best friend. Granted, at that point, if you're actually streaming that much, your limitation is probably going to be your bandwidth. That's going to be the true bottleneck, not something like this.
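The "how many simultaneous streams" test boils down to launching N transcodes at once and checking that each one's speed multiplier stays at or above 1.0x (real time). A minimal sketch of that pass/fail check, with invented sample data rather than the video's actual output:

```python
import re

def all_realtime(progress_lines):
    """Return True only if every ffmpeg progress line reports speed >= 1.0x,
    meaning every concurrent transcode is keeping up with playback."""
    speeds = []
    for line in progress_lines:
        m = re.search(r"speed=\s*([\d.]+)x", line)
        if m:
            speeds.append(float(m.group(1)))
    return bool(speeds) and all(s >= 1.0 for s in speeds)

# 26 streams all holding >= 1.0x passes; one stream sagging to 0.87x fails.
print(all_realtime(["speed=1.12x"] * 26))                       # True
print(all_realtime(["speed=1.12x"] * 25 + ["speed=0.87x"]))     # False
```

Stepping the stream count up until this check first fails is what yields the 26-stream figure for the B60 and 5 for the 3090.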
And really, the performance of this Intel is so good, and the power consumption is also considerably less. During some of these transcoding tests, I clocked the 3090 at a max of 210 watts while this B60 maxed out at 107 watts, measured from the wall with a little power meter. So, half the power for dramatically better transcoding performance. Pretty good. If you have a server running 24/7, consistently transcoding, 100 watts could be a huge energy savings depending on the cost of power in your area. Now, sticking with the topic of power, when I was doing some AI tests, the 3090 system drew a max of 395 watts from the wall while the Intel system had a max of 230 watts. That's not just the GPU; that's the entire system. Idle for both of them was about 68 watts in the system I was using. If you've been watching this channel for a while, you've probably seen this computer a couple of times. This is my old faithful: an AMD system with a Ryzen 7 3700X CPU in it, so kind of older, with a bunch of DDR4 memory, but I love it. Never going to get rid of it. Now, when it comes to local AI, this is where the Nvidia absolutely shines versus the Intel, especially considering how hard it was to get a lot of these benchmarks working at all. This card is too new. There's not much support across a lot of the models and applications, and it was hard to get good benchmark results out of this thing. I couldn't get the PyTorch XPU backend working on Fedora; there were Python version incompatibilities and missing libraries. It was just difficult. So I couldn't get image generation to work, and I couldn't get YOLO object detection working. Running those tests on Linux just wasn't really feasible. I did get one thing working, though: Intel has an application.
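That "100 watts could be a huge energy savings" claim is easy to put numbers on. A small sketch of the arithmetic, using the measured transcode peaks from this video and an assumed electricity rate of $0.15/kWh (the rate is an illustration, not from the video):

```python
def annual_savings(delta_watts, usd_per_kwh=0.15, hours_per_day=24.0):
    """Estimated yearly cost difference for a constant extra load.
    delta_watts: extra draw of one card vs the other under the same workload."""
    kwh_per_year = delta_watts / 1000 * hours_per_day * 365
    return kwh_per_year * usd_per_kwh

# 3090 peaked at 210 W transcoding vs 107 W for the B60 -> 103 W saved.
# Running flat-out 24/7 at $0.15/kWh:
print(round(annual_savings(210 - 107), 2))  # 135.34
```

Real servers idle most of the day, so the true saving sits somewhere between zero and this worst-case figure, but it shows why the wattage gap matters for an always-on box.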
It's their AI Playground, which comes with custom-configured models that are already set up to work, and that gave pretty decent performance. So I do have high hopes. This is a 24 gig card, so it can load larger models. I loaded gpt-oss and asked it to make a wiki or something, and it started going, then stopped. I was like, "Hey, what's happening?" and it finished the prompt, which was pretty cool. So, if you're looking to dive into AI right now and you already have a whole bunch of workflows and preferred models, getting an Intel card probably isn't going to be the best idea. This is something you're going to have to wait a little bit for to get the full capabilities out of. But in my little bit of testing with the Intel tools, image generation worked really well and was snappy. It had a video thing which was weird and didn't give me the results I was looking for. But it can do AI. And as some of the back-end technologies get better and compatibility increases, I can see this being a pretty powerful contender for AI and local inference, running Ollama, things like that, which is generally my preference for my automations and the other things I have going on. So, at the moment, I don't think I'm going to put this thing in my actual home server. I'm going to keep the 3090 there, mostly for the AI stuff. My 13th gen i5 is doing perfectly fine for the number of streams I have going on. I think I have about eight different people on my media server, and at most three of them are streaming at the same time, so Quick Sync on the CPU is going to be fine for my use case. What I think I'm going to do is actually put this in my desktop computer. Or, better said, actually use a desktop computer. I've been using this laptop for a while instead, but this is nice.
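The tokens-per-second figures in the description can be derived from the `eval_count` and `eval_duration` fields that Ollama's generate API reports after a run (`eval_duration` is in nanoseconds). A small sketch of that conversion, with made-up sample numbers rather than the video's raw data:

```python
def tokens_per_second(eval_count, eval_duration_ns):
    """Ollama reports generated-token count and generation time in nanoseconds;
    throughput is simply tokens divided by seconds."""
    return eval_count / (eval_duration_ns / 1e9)

# e.g. 512 tokens generated over 9.5 seconds of eval time:
print(round(tokens_per_second(512, 9_500_000_000), 1))  # 53.9
```

Averaging this over a few prompts per model is the usual way numbers like "Llama3 8B Q4: 141.4 / 53.9 tok/s" get produced.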
So, I can keep the 3090 in my home lab and use the B60 on a day-to-day basis, because it does pretty well. DaVinci Resolve, well, not the best example; the 3090 did kind of smoke it. I threw in a 4-minute BRAW file, did some color correction, and rendered it out. The 3090 did it in about a minute and a half while this B60 took about 2 minutes, which was about the same speed as this M5 MacBook Pro. So this is on par with a MacBook Pro, while the used 3090 is just lightning fast. And then finally, gaming, and this is where I was the most surprised. This B60 did pretty well. Any research I did happened after I bought it, unfortunately, and it said this card is mid when it comes to gaming. But I was getting good frames. You can see on screen now, I'm getting like 90 to 100 frames at 1080p in something like Fortnite, and good frames in other games that are pretty graphically intensive. At 1440p in Fortnite I was getting around 60. So, for daily driver usage, video editing, even light gaming, this Intel GPU seems to be awesome. And for how much VRAM it has, it's priced rather competitively. Getting something with this much VRAM new right now is ridiculously expensive, so the fact that you can get this much for the price I paid is super cool. And keep in mind, I'm comparing something I got at a pretty good deal used versus something brand new at the same price; used is probably going to be a little bit better value. But as the backend technology for this GPU grows and develops, I'm really looking forward to using it consistently. And I mean, I just got it. I still have some more tests to run. All the testing I've done with it so far has been either in Fedora Server or on Windows on a desktop.
So, next I'm going to throw either a Fedora or Ubuntu Linux desktop on the desktop computer, use it there, and see how OBS and everything functions. If you're interested in an update in a couple of months or so, after I've been using this for a while, please do let me know down below. With that, I'm really enjoying it so far. I'm going to use it as my primary desktop GPU, not home lab, yet. If I'd gotten the cheaper one, I'd probably throw it in the home lab and use it as my main encoding GPU. Honestly, if I were to build a home lab right now, do-it-yourself from scratch, picking all the hardware, I would pick a high-core-count AMD CPU and a relatively affordable Intel GPU specifically for hardware transcoding. I honestly think, and it's weird to say, that is the best combo for building out a little home server. AMD CPU, Intel GPU, perfect. With all that, do let me know what you think below, and again, there'll be links to Micro Center down below as well. I do hope you have an absolutely beautiful day, and goodbye.
Video description
Big shout out to Micro Center for sponsoring this video: https://micro.center/3b8d52
Shop AI Workstations and More: https://micro.center/44c80e
Sign Up For a FREE 128gig Flash Drive: https://micro.center/dfa294
Visit Micro Center News: https://micro.center/6bbda9

I tested Intel's Arc Pro B60 against a used RTX 3090, both 24GB cards at roughly the same price. The B60 absolutely crushed it in transcoding with 26 simultaneous 4K streams at half the power draw. But the 3090 fought back hard on AI workloads thanks to CUDA. So which one actually makes sense for a homelab? Turns out, neither of them is going where I expected. All benchmarks were run on Fedora using the same system, same test files, same scripts.

Guide I used for benchmarking: https://gist.github.com/TechHutTV/7f78ac0a9f70ad1435831839ea58483b

BENCHMARK RESULTS
System: Ryzen 7 3700X, DDR4, Fedora
Both cards: 24GB VRAM
All results: 3090 / B60

VIDEO ENCODING
HEVC 4K Encode (fps): 68 / 150
HEVC 4K to 1080p (fps): 155 / 202
AV1 4K Encode (fps): N/A / 138
AV1 4K to 1080p (fps): N/A / 202
HEVC Quality (VMAF): 85.8 / 89.8
AV1 Quality (VMAF): N/A / 80.6
HDR Tone Map (fps): 30 / 47
Max 4K to 1080p Streams: 5 / 26

AI INFERENCE
Llama3 8B Q4 (tok/s): 141.4 / 53.9
Llama3 8B Q8 (tok/s): 95.0 / 9.8
Mistral 7B Q4 (tok/s): 149.1 / 57.7
Gemma2 27B Q4 (tok/s): TBD / 17.1
SDXL Image Gen (sec): 10.1 / N/A
Whisper small (RTF): 0.016 / 0.241
Whisper medium (RTF): 0.026 / 0.547
Whisper large-v3 (RTF): 0.052 / 0.701
YOLO v8n (fps): 172 / N/A
YOLO v8s (fps): 162 / N/A

POWER (from wall)
Idle (watts): 68 / 68
Transcode Peak (watts): 210 / 107
AI Peak (watts): 395 / 230

🏆 FOLLOW TECHHUT
X (Twitter): https://bit.ly/twitter-techhut
MASTODON: https://bit.ly/mastodon-techhut
BlueSky: https://bsky.app/profile/techhut.bsky.social
INSTAGRAM: https://bit.ly/personal-insta

👏 SUPPORT TECHHUT (all links below this line will earn us commission)
BUY A COFFEE: https://buymeacoffee.com/techhut
YOUTUBE MEMBER: https://bit.ly/members-techhut
—PAID/AFFILIATE LINKS BELOW—

🛎 RECOMMENDED SERVICES
VPN I USE: https://airvpn.org/?referred_by=673908

📷 MY GEAR
HARD DRIVES: https://serverpartdeals.com/techhut
MinisForum Tablet: https://amzn.to/3SeMmds
Beelink N200: https://amzn.to/3xZjeQs
Raspberry Pi 5: https://amzn.to/4f3yUCN
Q1 HE QMK Custom Keyboard: https://www.keychron.com/products/keychron-q1-he-qmk-wireless-custom-keyboar?ref=techhut
ASUS ProArt Display: https://amzn.to/4i4cAKz

0:00 Intro
0:21 Price & specs
0:37 Benchmark results teaser
0:55 Gaming teaser
1:01 Micro Center trip
1:40 Sponsor: Micro Center
2:45 Homelab context
3:13 Server encoding benchmarks
3:44 AV1 encode
4:17 HDR & quality
4:25 Stream stress test
5:01 Quick Sync discussion
5:48 Power consumption
6:24 AI power draw
6:48 Test system
7:03 AI benchmarks
7:41 Intel AI Playground
9:02 The decision
9:50 DaVinci Resolve
10:19 Gaming
11:00 VRAM value
11:33 Future plans
12:03 AMD CPU + Intel GPU combo
12:40 Outro