HOLY $H!T - The FASTEST CPU on the Planet - AMD EPYC 9654
Thanks to Supermicro for sponsoring this video! Check out Supermicro's H13 System Portfolio, powered by AMD EPYC™ 9004 Series Processors, at geni.us/q3l95Jp
Fun Fact: In a 2-socket server, you need 4 threads pegged at 100% to get 1% CPU utilization.
AMD's new SP5 socket, and HUGE new Epyc Genoa CPUs are crazy! Using Supermicro's servers, we set a bunch of World Records on a bunch of benchmarks. Intel is gonna be sweating unless their upcoming Sapphire Rapids platform can magically compete. While Intel's Raptor Lake CPUs might win against Ryzen in the mainstream, it's a whole different story in the datacenter.
Discuss on the forum: linustechtips.com/topic/14737...
Purchases made through some store links may provide some compensation to Linus Media Group.
► GET MERCH: lttstore.com
► SUPPORT US ON FLOATPLANE: www.floatplane.com/ltt
► AFFILIATES, SPONSORS & REFERRALS: lmg.gg/sponsors
► PODCAST GEAR: lmg.gg/podcastgear
FOLLOW US
---------------------------------------------------
Twitter: / linustech
Facebook: / linustech
Instagram: / linustech
TikTok: / linustech
Twitch: / linustech
MUSIC CREDIT
---------------------------------------------------
Intro: Laszlo - Supernova
Video Link: • [Electro] - Laszlo - S...
iTunes Download Link: itunes.apple.com/us/album/sup...
Artist Link: / laszlomusic
Outro: Approaching Nirvana - Sugar High
Video Link: • Sugar High - Approachi...
Listen on Spotify: spoti.fi/UxWkUw
Artist Link: / approachingnirvana
Intro animation by MBarek Abdelwassaa / mbarek_abdel
Monitor And Keyboard by vadimmihalkevich / CC BY 4.0 geni.us/PgGWp
Mechanical RGB Keyboard by BigBrotherECE / CC BY 4.0 geni.us/mj6pHk4
Mouse Gamer free Model By Oscar Creativo / CC BY 4.0 geni.us/Ps3XfE
CHAPTERS
---------------------------------------------------
0:00 Intro
1:03 Socket SP5 Features
4:07 DDR5 ECC
5:26 Expandability
8:32 Power
9:24 Let's Fire This Thing Up
10:14 Hyper vs Cloud Servers
11:21 Put a 4090 "In" it
11:39 More Cores, More Problems
13:12 Epyc vs Cinebench
15:17 y-cruncher
17:00 Phoronix Tests
19:33 Outro
I work in the HPC industry and can confidently say that the EPYC Genoa-X instance types on AWS and Azure are going to be a big hit. All of our major customers have been requesting access to these processors since they are blazing fast and have incredible interconnect speeds. It's a lot of fun to be an early adopter of this amount of processing power.
nice 🙂
Been trying to plant a bug in my boss's ear about the same thing. He's been a big fanatic of Intel for such a long time, however.
Good to know the response is good so far 😊 Not sure what kind of workloads you run, but we are working on optimising all of our software to take advantage of the new CPUs, so it should be even better soon!
blazing fast... just like their java script with the New framework released an hour ago🔥🔥🔥🔥🔥
It also helps that AWS charges 20% less for AMD-based EC2 instances XD
Comes in, breaks records, leaves. Absolutely monstrous CPUs.
The production value is so so so great on this channel, shout out the camera and editing crew!
My first job working on server BIOS was actually on the AMD Genoa platform for Dell, it’s so cool seeing people work with it now that it’s public
Man, we are at the point where we could theoretically install an operating system on the L3 cache alone.
Someone has to do that like NOW.
Agreed, OS's should be embedded.
Well, we should be able to cram DOS onto it
can we run Doom off of just the L1 cache though?
@@avroarchitect1793 asking the real questions here
You know a dual CPU system is absurd when it breaks task manager.
No one except linus would be running windows on these computers. These would be linux machines.
Even the 64 core breaks task manager, not even speaking of duals
You'll find task manager in the corner at the local pub wondering where they went wrong in life.
It "broke" Cinebench too, that I never saw lol
The task manager is broken by armless Microsoft monkeys.
Always wanted to see the best intros from tech creators on YouTube. Yours literally always fits and rocks🤖🤖💥💥
Love this video, there's something magical about using these kinds of machines for the first time
This is the video that finally made me realize how big a 4090 really is. I actually started laughing when that comparison happened wow
For those looking, it's 9:16
@Derick D If you missed it then you must be blind.
It's comically large lol
At 12 inches it's huge, but the 3090 was actually 12.5 inches, so somehow even bigger.
@@Jimmy_Jones or maybe, like me, I didn't feel like watching the whole thing but when I saw the comment it intrigued me. So for others like me, there's your time saved
In regards to the RDIMM DDR5 peculiarities you guys noticed in this video: DDR2/DDR3/DDR4 were all 72-bit ECC. As you noticed, DDR5 is 80-bit ECC due to the DDR5 DIMM having two separate 32-bit subchannels. Each subchannel needs its own parity, and while it only needs 4 bits of ECC per subchannel, there aren't any 4-bit die structures, so they each get a full 8 bits of parity. This means, yeah, a non-ECC DIMM has 8 DRAM chips and an ECC DDR5 DIMM has 10, where previous DDR formats only needed 9 per rank.

Since 10 chips is 11% more than 9 chips, there will always be at LEAST an 11% cost premium for DDR5 ECC DIMMs compared to DDR4, even if there is per-bit price parity between DDR4 and DDR5 DRAM. Also, because each DDR5 DIMM has onboard power regulation (a PMIC), DDR5 will cost more by structure. Eventually, DDR4 and DDR5 DRAM production volumes will flip, so DDR5 will eventually become lower cost than DDR4. However, that cost premium is always going to be baked into the structure.

Also, there will be off-roadmap 72-bit DDR5 RDIMMs designed for specific hyperscale customers who do not want to pay the extra bit premium for full 80-bit ECC. A 72-bit DDR5 ECC DIMM does NOT have full ECC coverage, but companies like AWS who control their entire software stack have written their environment to be aware of this and just deal with it. 72-bit DDR5 will not be available to general customers, because most people won't understand what 72-bit DDR5 is, would buy it expecting full ECC support, and would have ECC failures in production due to the nature of 72-bit ECC in DDR5. To avoid customer fallout, these 72-bit modules won't be available to most customers, nor will they be advertised on websites or general product roadmaps.

Secondly, you noticed that the registered and unbuffered DIMMs have a different notch. This has always been the case.
You were not specifically comparing ECC vs non-ECC DIMMs in this video when you compared the key notch - yes, one had ECC support and the other did not, however, the key difference was one was an Unbuffered DIMM and the other was a Registered DIMM. There are no modern memory controllers which can support both RDIMM and UDIMM memory modules, so they are keyed differently. All registered or RDIMM modules are ECC, but unbuffered or UDIMM modules can be ECC or non-ECC. Consumer processors are all based on UDIMM. So, yes, you can have a CPU support both ECC and Non-ECC memory. Specifically, unbuffered ECC and unbuffered Non-ECC.
Thanks for the explanation. Where can I learn about all these and more? Very interested in DDR technology
Sounds like you know a lot about this. Very interesting explanation where I could understand most of it even though having almost no idea about this stuff otherwise. The 72-bit ECC module story sounds interesting. How does one learn about such a story?
Bro break it down. You lost me after i clicked
Fantastic lecture! This man should teach a class on memory topology!
Yes
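The chip-count arithmetic in the explanation above is easy to sanity-check; here is a toy calculation (my own sketch, not from the comment thread):

```python
# Toy model of the ECC DIMM chip-count premium described above.
# DDR5 splits each DIMM into two 32-bit subchannels; each subchannel
# only needs 4 ECC bits, but with no 4-bit die structure available it
# gets a full x8 die, so ECC overhead rounds up per subchannel.

def ecc_chip_count(data_bits: int, subchannels: int,
                   ecc_bits_per_subchannel: int, die_width: int = 8) -> int:
    """Total DRAM dies per rank on a DIMM built from x8 dies."""
    data_dies = data_bits // die_width
    # each subchannel rounds its ECC bits up to a whole die
    ecc_dies = subchannels * -(-ecc_bits_per_subchannel // die_width)
    return data_dies + ecc_dies

ddr4_ecc = ecc_chip_count(data_bits=64, subchannels=1, ecc_bits_per_subchannel=8)
ddr5_ecc = ecc_chip_count(data_bits=64, subchannels=2, ecc_bits_per_subchannel=4)
premium = (ddr5_ecc - ddr4_ecc) / ddr4_ecc

print(ddr4_ecc, ddr5_ecc, f"{premium:.1%}")  # 9 10 11.1%
```

So the "at LEAST 11%" figure falls straight out of 10 dies versus 9.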
Just incredible progress being made in silicon! Blinding speeds
Can't wait to see the updated video for Epyc 9754
I can imagine telling my 14 year old self, using Windows 98 on a 600MB hard drive, that there would eventually be a 96 core CPU
cool story bro.
They have come a long way since Rage 3d 8mb....
@@dobermanownerforlife3902 8 millibit?
the real question should be, can we install windows 98 on the L3 cache?
Biggest surprise (for me at least, at the time) would be the "multicore CPU" thing. We didn't even talk about "cores", since a package contained one CPU with, natch, one "core".
- “How large is your computer's RAM?” - "Three Terrabytes." - "I mean the RAM, not storage! You know nothing about a computer." - "No, YOU know nothing about EPYC Genoa" *pulls out personal server*
I hope those Terrabytes aren't backed by USTbytes tho, or they would probably generate a lot of crashes
"How much is your cache?" "4 GB" "That's very little RAM" "No, I mean my server's cache is 4 GB."
remember the joke was about 128gb ram and we thought that was insane? yeah
this is like the stuff we made up as kids just throwing around numbers when talking about PC hardware, where the numbers were so big it wasn't funny anymore, just dumb. 😂
1/10th of one of these CPUs has more cores than the average Joe's whole PC. Just insane
What I would have done is let the new server go through a complete reboot and then re-run Cinebench. The reason being, I think the performance might be taking hits from Windows updates and driver installs and such; you may find that after a few full reboots the performance improves a lot.
The problem is even a warm reboot can take a few minutes on these. Source: I have a Supermicro Epyc Rome in my homelab.
Seeing Linus talk in these server themed videos and go into all the details and his avid interaction back and forth with the viewers and the machines, makes me feel like a parent whose gifted their kids the present they wanted the whole year on Christmas and the kids enthusiastically explaining to me what it is and how it works and everything. lol
Back in my ISP/Datacenter days, we had a long standing joke about slow boots. Whenever something took forever to boot, we would say it was doing a SuperMicro. We would joke around and say that the reason why people buy 4 of their servers at a time is because they took so long to boot that there was a risk that another one would fail before the first one finished booting back up. Good to know that things haven't changed much.
Don't know much about servers, but why does it take so long to boot? Do all Supermicro servers take a long time to boot? A little off topic, but I fell out of gaming/PC building for a while, and my old AMD Athlon X2 system booted much faster than my current Ryzen 7 system (due to UEFI, I would guess?). You would think with faster hardware it wouldn't take so long to boot.
@@Gatorade69 In datacenters we use a special type of RAM with integrated error correction code, or ECC. (With DDR5, a limited form of this comes to the wide consumer market: on-die ECC is part of the default design of every DDR5 module, though that's not the same as the full side-band ECC servers use.) Now back to the question you asked: compared to classic desktop setups based on the CPUs you mentioned, ecosystems using server-grade buffered RAM with ECC have a lot more going on during boot time, the primary time-consuming part being the so-called "memory training". That alone is a whole topic with a ton of settings in BIOS, and it probably deserves its own chapter on Linus TT or Steve's GamersNexus. Anyway, memory training is performed by the system on its first boot, or after any significant change to the particular system, during which the system sets up, tweaks, and tests the RAM and all its advanced features like ECC so that it works at its best. That takes time. On top of that, in this video Linus showcases the latest CPU by AMD and explains that there's a bug in the microcode causing the long boot times nonetheless, which AMD has announced a fix for in the coming days, hopefully.
@@MilanPutnik I worked for SoftLayer before IBM bought them for their cloud. We had Supermicro servers, and they actually didn't take that long to boot up in comparison to AWS servers. When I worked at an AWS manufacturing plant doing diagnostics, those were by far the worst. Lenovo had some shit boot times too. I actually prefer Supermicro boot times over Lenovo and AWS boot times any day lol. Unless you have one of them 4Us with 2/3TB of RAM, then forget it.
@@Gatorade69 Cause the motherboard needs to check absolutely everything. The ram and the hard disks usually take the longest to check.
Makes sense. Thanks for the answers. I remember old computers used to take a while to check the memory when booting up. I also wouldn't have guessed that it would check the hard drives too. Servers usually have a lot of space and memory, so I can see that taking a while to check them on boot.
Was floored at the 4090 comparison, the fact that the card is the size of the power supply and looked like a tenth of the server rack, is insane.
Yeah man, that thing is insane. I bet theres ITX cases with less volume than that
@@musguelha14 micro itx, you're probably right!
It's called bad engineering. Boomer tech. I can wait for thin small nice gpu with 75 watt to play 4K 240fps. No cable needed.
Well, er yeah, obviously that would be great. Count me in. Then you just know an AIB partner in 2036 is going to push 800 watts through that puppy to power the 80,000 shader units and the GDDR11XX. Until we hit some kind of ceiling or have some kind of sensible standard, this stuff is just going to get crazier and crazier. Unless there's some kind of architecture or engine that renders all that power obsolete. I started to get hyped for the Euclideon thing until I realised it was all voxel... But something. Just don't knock the people that made it all possible. These guys are legends. They made all this real. Like Dave Haynie and the guys at Commodore are my heroes. The whole 'boomer' shit is just insolent, infant bullshit to piss people off and get attention. Rise above that. You're better than that.
0:42 a 3 cpu system? that's really weird... I never heard of an odd number socket server til now
love the Djent background music in the beginning
Almost 20 years ago we were stunned with dual-core CPUs. It's amazing, what AMD is doing.
And that was a game changer.
@@flammablewater1755 Cerebras already sells a computer with 850 thousand cores on one processor. It's basically a CPU that fills the entire silicon wafer.
Yeah. Software, except for a few special cases, still can't do squat with these cores.
@@SaHaRaSquad and it consumes 15kW, you'll need a 400V 3x20A connection for just one as the whole system with that one CPU consumes 20kW
@@BH4x0r Which makes it incredibly efficient, consuming only 0.023W per core.
Competition indeed does breed innovation. Still remember when AMD was the underdog? Do you?
Don't forget that AMD still is the underdog in terms of market cap or revenue
my brother in christ, there are only 2 cpu makers
They still are, people who dont follow tech topics have no idea that intel was worse (in most cases) for 5 years. So yeah, we remember.
@@cyjanek7818 Depends on what metric you look at. Performance? In the server space, absolutely not an underdog anymore. Market cap in the consumer market? Yeah, it's still an underdog tale developing.
Only because of Microsoft. There used to be _so many_ CPU makers.
love Supermicro)) it's like an old VW Golf, always ready for any kind of hardware to be plugged inside :D also Dell systems are pretty cool.
Good luck to all. I think many small processors working together in parallel are better than one big processor, in servers and in every computer.
To those complaining about a server booting in 15 minutes, there were IBM p series (booting AIX) that would take half an hour just for the POST, before loading the OS. With the OS booting and getting everything running, it would take about an hour. I had to deal with 7 of those in a test lab I worked at back in 2010-2016. They were not fun.
Probably built not having to reboot it often. Yeah I too hate long server reboot times while my users keep asking "Is it back up yet as I have work to do?!" 🤣
Huh. Stores I assist still have p615 servers running AIX 5.X and they take a bit less than 15 min to reboot 🧐 But these need to go 😭
@@RobBroderick44 Know what's more fun? Having those in a test lab, where they have to be reformatted and total OS reinstall about every 3-6 months. Wait half an hour for POST, then have a 30 second window to press F11 to get the boot menu to tell it to boot from CD, then another half an hour to start the OS installer. Yeah, I had fun with those.
@@dangingerich2559 If it's ever possible in your case, switching to Linux and kexec'ing straight into the new kernel would let you skip the POST time whenever the reason for the reboot isn't to check the hardware. Also, there have been more and more patches landing in Linux over the past two years (with more to come) to allow parallel CPU bring-up and greatly reduce boot time.
@@naguam-postowl1067 That's nice and all, but I (thankfully) am not in that job anymore, and am not dealing with IBM p series or AIX any longer. I don't plan on applying to any jobs that include such things, either. At the time, I had to stay with AIX as the OS because we were testing customer specific circumstances, to make sure their OS and software would work with our backup appliance. So, I had little choice in what OS we tested. I believe that customer was US government, too, so they had little choice in the matter, either.
You know you have a beefy CPU when your OS's Task Manager window shows CPU cores like it's a defrag.exe from the late 90s 😂
lmao that's actually true
lol they clearly just glued 4 CPUs together
@@pandemicneetbux2110 still beafy
@@pandemicneetbux2110 It's actually 12 8-core chiplets plus a central IO die.
@@pandemicneetbux2110 are you an intel engineer?
You had me at 96 cores....then you throw in that it supports 6 Terabytes of DDR5!!! My mind completely exploded after that!!! Now imagine those numbers on a video card.....when that day comes I'm upgrading.
Man imagine having to manufacture all those tiny pins, truly insane tech
The comparison between the PSU and the 4090 had me laugh spontaneously.
Is it me or is Nvidia taking it a little too far with the size of the 4090?? I won't be able to afford one for many many many many years, but I personally think it's outrageously big :P
@@Emell09 it is outrageously big
@@etaashmathamsetty7399 😂😂
@@Emell09 I just built my dream computer (Threadripper Pro 5975WX 32-core and 4090 Liquid Suprim X GPU) and I used the HAF 700 case. The Liquid Suprim X 4090 looks small in the HAF 700 case.
@@2011blueman And play Minecraft with it, am i right?
The chaos level of those filenames is impressive. I bet LMG has strict workflow processes in place purely to ensure that Linus never gets the chance to name an important file.
[chuckle] Yeah. People generally don't think to make sure their programs handle "filename too long" gracefully. PTS was just trying to soldier on rather than aborting after the first failure.
@@ssokolow "[chuckle]" 🤓
Has to be an AMD processor, the great one, the advanced machine. AMD = Advanced Micro Devices. AMD EPYC
My naming scheme for personal files is mgdasfkjg, and then for work it is 'DateCreated,Project,Revision#,MaybeExtraDetail'
@@Myrskyukko ""[chuckle]" 🤓" 🤓
I was watching a java tutorial before this and had the playback speed on 2x. That intro bit was pretty funny in 2x.
I don't need such server hardware, but the video was freaking funny and I watched it anyway! xD
I keep forgetting how ridiculously huge the 4090 is until it's compared to other things.
I almost got tempted to buy one today, then I remembered how absurdly large and power hungry it is and I don't want to encourage them.
@fakecubed yeah I'm skipping this generation. If something happens and my 2080 breaks, then I'll get a 3080 or maybe possibly an AMD card, but no way am I touching the 4000 series.
@@AgentJ1314yea i just got a 6950xt for 800 recently on amazon. great steal
The moment I read this, Linus started talking about that
The 4080 is also massive
I actually work in purchasing and deploying equipment like this. We've been scratching our heads over how to properly cool Genoa systems. Supermicro's website includes notes that in order to support higher CPU TDPs "special requirements" exist. When we've spoken with Supermicro they've told us that (as of Milan) above 220W requires liquid cooling. But here you are air cooling the 9654. This brings me to a question: What thermals were you seeing when running the CPU(s) at full load? What CPU socket temps, etc? Thank you for this video and many more.
Actually, just running one is different from running them in the datacenter. I have to say Supermicro is very cautious, because you don't want something like this burning down your rack 😂
They can get away with air-cooling it because they only ran one of them. Datacenters require multiple of these all tucked together, so liquid cooling with high whining airflow is going to be a must
use an air compressor
The answer is just one Wendell @Level1Techs away - check the forum over there!😊
You can cool more than 220W on air, but when you have a bunch of machines in a cabinet, the air flow design becomes important. If it's just a cabinet sitting in an air-conditioned room, that's not good enough. It should be a cabinet where the hot air out the back is collected at the top and ducted away, and all empty slots at the front are covered with blanks. So the only air path entails cool air coming in the front, hot air going out the back, and all that hot air being ducted away through the top. It's really above 280W that you need to seriously consider water cooling, which means not a typical data center. And to reach the max 400W TDP, you need to know exactly how you're going to cool it. AMD only put the 400W capability in because customers requested it.
Finally slots to add memory. That only took 40 years. Really cool stuff. Thanks.
3:51 How many things should I ACTUALLY check the documentation for with these kinds of things? 7:05 16x slot of PCIe gen 1*.1* 11:04 Another cool thing to if you've read documentation.
You should totally try the software renderer in Crysis again! See how good these CPUs are at being a GPU!
But can it run Crysis? /s
@@CoreDreamStudios Probably not, these CPUs suck for gaming.
@@DarthMuse He was talking about software renderer just to see how it compares to other CPUs.
Would be good to quickly see it as an aside again in another episode, just to see how it compares. I'm betting it would at least be playable this time. What was it before? Like 12fps?
I just need Ryzen 7700 coming January 10th 2023 to build my 4K 240fps PC. But idk what gpu, all high power earth killers higher than $400 price.
The Blender result isn't necessarily another dual-Genoa system. Blender tests have a single-threaded setup period at the beginning. With such a short render time overall, that setup period becomes significant. So if you have, say, 64 total cores with an EPYC F-series chip (the higher frequency models), the setup period will complete faster, allowing the render proper to begin faster. The actual render time could easily take longer, while the total task time is shorter due to a faster single-core speed.
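What this commenter describes is essentially Amdahl's law: with a fixed single-threaded setup phase, a 64-core F-series part with faster cores can finish the whole job sooner even while its parallel render phase is slower. A rough model with made-up numbers (nothing here is a real benchmark result):

```python
# Hypothetical timing model:
#   total = serial setup / per-core speed + parallel work / (cores * per-core speed)
def total_time(setup_work: float, render_work: float,
               cores: int, core_speed: float) -> float:
    return setup_work / core_speed + render_work / (cores * core_speed)

# 96-core part at baseline clocks vs a 64-core F-series part ~20% faster per core
big = total_time(setup_work=10, render_work=600, cores=96, core_speed=1.0)
fast = total_time(setup_work=10, render_work=600, cores=64, core_speed=1.2)

print(f"96-core: {big:.2f}s, 64-core F-series: {fast:.2f}s")
```

With these toy numbers the F-series part wins on total time even though its render phase alone takes longer, which is exactly the scenario described above.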
Windows can't handle all those cores; you need Linux to squeeze them all. Poor Tim Cook, he must be salivating
I would like to see a benchmark test with Maxwell Render. It's the slowest compared to other renderers but the results are photo-realistic. I would say its my favorite.
Merry Christmas to everyone at Linus Tech Tips.
gotta love that AMD sockets are starting to approach a pin count equal to an SD monitor's pixels
It’s almost stupid to think that just 30 years ago we couldn’t even have one mb of vram and we were not hitting even 200 mhz frequencies and now we got 96 core CPUs boosting to very good frequencies
1992 had CPUs that hit 200 MHz; by '95 there were some hitting 500 MHz. Meanwhile I had a 386DX, reading about them. Look up Digital Equipment Corporation's play in early computers.
3 decades? Just 10 years ago Sandy Bridge Xeons had a max core count of 8, and we remember when Intel went from 4 cores to 6 with the 990X. Today the top desktop one is 24 cores
@@idzkk albeit with only 8 P-cores
@@idzkk And we still don't need more than 6 cores..
@@Malc180s Speak for yourself bud.. I absolutely need more than 6 cores. I wouldn't get half of my work done on time without the extra compute.
11:11 I was sitting in my car thinking that my city was running a nuclear sound test warning ⚠️
Love that sweater, Mr. Linus
As a career sysadmin I'm glad to see LTT doing way more videos on datacenter infrastructure and getting tours of them and fabs. Also, lmao at the 4090 being used to compare how "absurdly" giant something is. And obviously size doesn't matter Linus! you guys have like 4 kids.
They should hire someone to make good sysadmin/network engineering content that's entertaining enough for young people to learn from and potentially find a career in. Linus kinda bodges all the infra for his company because he enjoys that, but if they had someone making videos in a way that captivates his current audience while following best practices and all that, it could get a lot more people into our industry
@@tranquil14738 Easier said than done. Let's be real, it's very hard to hype up content that's inherently boring, like servers.
@@hueanao Lots of people such as me find servers exciting
@@hueanao I might be a loser but I find the topics very interesting I just find the videos online very monotonous and monochrome
@@tranquil14738 I think the topics are very interesting, they just need to be delivered in an interesting manner. and ltt media has managed to make videos on computing compelling to watch, so I'm sure they can do the same to data center infrastructure
As someone who occasionally works with HPC servers but never a 40 series card, that comparison at 9:19 is wild. The 4090 is too damn big.
But, how else am I going to overcompensate for relatively cheap??? It’s not like I have an extra $100k(+$20) for a lifted truck, w/ flesh-colored Truck Nuts.
Just another reason to never buy one.
the 5 is going to be 4 full slots. the 4 series is only 3.5. so prepare for them to get even bigger
Not just big but heavy as well. If the 50 series is going to be bigger, very few will have cases big enough to fit it in. I've got a full-size case and the 4090 only just fits; I had to take some HDD enclosures out to get it in. Mine came with a support rod to hold the weight of it lol. It screws into the end closest to the front of the case and sits on the bottom of the case to give it extra support. Had problems trying to fit the support rod, as I have an intake fan on the bottom of the case where the rod is supposed to go. It's the length that is the problem with them, not how many slots they take up. No way will you be able to put 2 of them in a case and use SLI, because the motherboard would buckle under the weight.

You also have to keep an eye on the 40 series cards, as the power cables from the PSU to the GPU can melt and catch fire. Nvidia had to get new cables made, so the newer cables should be fine, but if you get one of the 1st-gen cables that comes with the GPU, it could melt. It's a 16-pin cable, but not the same type other GPUs use: 12 pins plus another 4 smaller pins, so an adapter is supplied to connect it to a PSU, and it's the adapter that has the fault. The adapter has sockets for three 8-pin cables from the PSU. It's where the adapter fits into the GPU that tends to melt and catch fire if it's not plugged in correctly or comes loose, and it's not a very tight fit, so it can come loose very easily.

Seems Nvidia are scamming a bit as well; turns out the 4080 is a 3080 Ti rebadged and put in a bigger case
@@cliffbird5016 What you said about Nvidia scamming is baseless bullshit. The 4080 has 4 more gigs of vram than the 3080ti and its boost clock is nearly a gigahertz higher.
That's gonna really make my Windows for Workgroups 3.11 fly!
It will be an amazing server CPU! With modern programming languages it is very common to benefit a lot from multiple CPUs. You can run dozens of Docker images with it. But I doubt you would need a CPU like that in a PC :D
7:14 "I wouldn't marry you if I thought it mattered" -Yvonne probably.
Back in around 2015, when I was still in college, AMD was particular with hiring. They hired mostly from the best unis, and it finally paid off.
@@thomasb282 You apparently forgot that AMD actually got a nice headstart in the GPU market by acquiring ATI. They already proved they were capable and weren't all that small. You make it sound like AMD cut their tiny little corner shop in half to make GPUs one day.
My company just recently started using supermicros. Good products.
Cool, the slots remind me of AGP from back in the day.
A normal motherboard can support ECC and non ECC memory. The different notch is because EPYC and other server platforms REQUIRE registered memory, while platforms like am4 only support unbuffered dimms, which can still be ECC (with an intel W480 motherboard, for example).
AM5 consumer CPUs supports both ECC and non ECC RAM
@@valentj3 yes but only unbuffered/unregistered, EPYC requires buffered/registered ECC RAM.
I think the notch is because DDR5 RDIMM's run off 12V while UDIMM's run off 5V. Why this is, nobody knows, especially with intel pushing 12VO and PSU 5V rails being not that great usually.
@@stephhugnis Many server power supplies are only 12v, even if they're not ATX.
@@Henrik_Holst ahh makes sense. Thanks for the info!
6:11 Brain can't comprehend, it's like we're 1 step away from literally downloading more RAM
5:30 Linus just casually has a 4090 in the background
Love that this YouTuber doesn't get that unbuffered ECC is a thing
imagine how fast this bad boy could print hello world to the screen
this wont really be faster than a normal PC (might even be slower)
depends on the screen
@@CalcProgrammer1 really doesn't. Printing to standard out is just writing to a file
@@hwstar9416 Depends what you mean by screen. You can write to stdout faster than stdout can be rendered to a framebuffer and stdout can be rendered to a framebuffer faster than your screen's refresh rate. However, the original question was explicitly how fast it could write to "the screen" and the screen only goes so fast.
@@hwstar9416 You must be fun at parties
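The stdout point in this thread is easy to demonstrate: once output goes anywhere but a real terminal, "printing" is just a buffered file write and no screen rendering is involved. A quick sketch (timings are machine-dependent, so none are claimed here):

```python
import io
import os
import time

def time_prints(stream, n: int = 100_000) -> float:
    """Time n print() calls of 'hello world' to the given file-like stream."""
    start = time.perf_counter()
    for _ in range(n):
        print("hello world", file=stream)
    return time.perf_counter() - start

# Writing to an in-memory buffer: pure CPU + memory, no terminal involved.
buf_time = time_prints(io.StringIO())

# Writing to the bit bucket: still a real file write through the OS.
with open(os.devnull, "w") as devnull:
    null_time = time_prints(devnull)

print(f"StringIO: {buf_time:.3f}s, {os.devnull}: {null_time:.3f}s")
# Printing to an actual terminal emulator is typically far slower,
# because the terminal has to render every line to the screen.
```

So "how fast can it print hello world" really does depend on the screen, not the CPU.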
I want a full episode like 16:18 where Linus and the team only say random sounds like this but still keep the intonation as if they are conveying actual message.
I'm waiting to see you put out some videos on quantum computing now.
Getting this for my next gaming pc
Reminds me when I installed 4x EPYC 7742 and 4x wtv the 8core was CPUs in new servers. They ate paste like crazy and had to go back to the store twice to get more. Was an honor.
man, I can only imagine how surprised and amazed you must have looked
I use toothpaste instead of that "special" whatever. Never had a CPU burn on me.
@@domainmojo2162 If you think having a good box is "luck" then I feel sorry for you...
@@domainmojo2162 A lot of people are indeed unfortunate. I just wanted to point out that our computers are nothing that just boxes. People and real life is the deal! So, having the "best" PC would be less exciting that meeting a new good friend or finding a relationship, or adopting a child etc. And keep in mind that I only got to TRULY understand that recently! That's why I now thinking about keeping my current PC as long as it gets. Getting a new is just waste of money!
Damn. This is a crazy time to be a tech head
I didn't understand half of what Linus said, but I loved every minute of it, just knowing how absolutely monstrous this stuff is.
I’m not in the tech/computer industry and know just enough computer to do PC gaming and understand the basic computer terms… but I love how I find myself watching Linus’s videos and nodding like I understand what all these terms/phrases mean when he explains things beyond the general consumer level lol.
9:16 Why do I get the feeling that "Is it longer than a 4090?" is going to be the next internet meme?
I work in animation, specifically CG rendering, and while we are looking at Gen4 for our renderfarm upgrade, we are held back by the thread limit of our render engine software, which at the moment is capped at 256 threads, so we are stuck for now on our decision!
Just disable SMT and run on 192 cores.
Maybe they shouldn't store the thread count in a u8 (2^8 = 256 values)
@@BurnsRubber Or buy the model with 64 cores.
Can't you render on GPUs?
1. Disable SMT and put each thread on its own dedicated physical core. 2. .....??? 3. Profit!!!
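The u8 guess above is easy to demonstrate: an unsigned 8-bit field can only hold values 0–255, so any larger thread count silently wraps modulo 256. A minimal sketch (hypothetical illustration, not the actual render engine's code):

```python
def as_u8(n: int) -> int:
    """Truncate n to 8 bits, the way an unsigned 8-bit field stores it."""
    return n & 0xFF

# 256 threads wraps to 0, and a dual-socket EPYC 9654 with SMT on
# (2 x 96 cores = 384 threads) would be reported as only 128.
print(as_u8(255))  # 255 (the u8 maximum)
print(as_u8(256))  # 0
print(as_u8(384))  # 128
```

Which is why the "disable SMT" workaround helps: 192 threads still fits in 8 bits.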
This made me feel like a 1st grader listening to calculus.. didn't understand shit but loved the enthusiasm Linus had lmao!
I have the same setup with this cpu which I use to emulate Nintendo 3DS games. Peak utilisation right here fellas.
While the GPU market has been quite disappointing in terms of innovation this year, AMD has been killing it with these performance boosts with EPYC. I'm willing to bet that these CPUs will absolutely DOMINATE the high-end server market for the next few years.
Well, as long as they can deliver them.
bruh what? Have you seen the 4090?
@@michaelhenlsy55 The 4090 especially is disappointing
@@stocky7134 What? If you wanted something better, go work for the government to get their advanced shit in the CIA or something. Go use their 8090 Super if these are even that slow.
I doubt it, amd and intel lack vision and are complacent.
If you are benchmarking Ubuntu, you have to set the scaling_governor to "performance" because by default, Ubuntu sets it to "powersave" even on Ubuntu Server.
You sure it's not "balanced"?
@@gzqhesflexcl "performance" won't make all your cores run at max frequency all the time. Also, RHEL-based distros use "performance" by default.
That's fucking bizarre
Ahh that makes sense
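For anyone who wants to verify this before benchmarking, here's a minimal Python sketch that reads each core's current governor via the standard Linux cpufreq sysfs layout (the `powersave` default on Ubuntu is per the comment above; switching to `performance` still requires root, e.g. `echo performance | sudo tee .../scaling_governor`):

```python
from pathlib import Path

SYSFS_CPU = Path("/sys/devices/system/cpu")

def scaling_governors(base: Path = SYSFS_CPU) -> dict:
    """Map each CPU (cpu0, cpu1, ...) to its current cpufreq governor."""
    return {
        p.parent.parent.name: p.read_text().strip()
        for p in base.glob("cpu[0-9]*/cpufreq/scaling_governor")
    }

if __name__ == "__main__":
    # Prints e.g. {'cpu0': 'powersave', 'cpu1': 'powersave', ...} on Ubuntu
    print(scaling_governors())
```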
God, now I've got to wonder if Star Citizen would run on this beast.
Just got a linustechtips ad on a linus tech tips video. Lol
12:12 i love how the lights on the graphics card progressively turn on as the fans ramp up
This monster has more cache than my first PC had hard disk space. What a time to be alive o.o
My first PC had 10MB of HDD. Took hours to defrag too.
13:40 Sir, I'm afraid you've gone mad with power 😆
that humming was driving me nuts.
The sound of that server at high speeds reminded me of a subway train standing in a station waiting for passengers. Awesome
Gets annoying after a while. That's why remote capability is so important. Put the noise elsewhere.
We do quite a bit of molecular dynamics and quantum chemistry and these are pretty much unbeatable in terms of performance, especially when coupled with selected GPUs. For Gromacs nerds: 1M atoms, 27 ns/day @ 2 fs timestep, running on 32 cores + 4GPUs.
That fan noise reminds me of the sound of an F1 engine.
Linus, if you are building a system with even the older chips, you may find yourself buying extra paste. I would get it from a local store and buy a little more than I'll need, just in case. If I end up with an unopened tube, I can return it next time I'm out that way, or store it for future use. The only problem with storing it is I don't know when I might use it, unless I decide to repaste a laptop or graphics card. For a graphics card, though, I would get that thick white paste that's like a liquid thermal pad; it's supposed to be really good where there are gaps between the memory and the heatsink. I'm sure I can find the name of the stuff easily with Google, and I may even order some of it too. It's always good to have some paste around; you never know when you'll need some.
Watching him apply thermal paste was like watching someone cement a parking lot.
Excessive thermal paste has been proven to add maybe 1 degree to temps lol
@@BaconSenpai cap
@@uchihasasuke7436 emphasis on "maybe" but really it's "at most"
AMD and Intel's rivalry is really driving innovation. I'm so glad to see both companies fighting to stay on top.
What an honour to be from the town where it's dedicated 🥰
nice edit there :D that made me happy. :) 13:40
Things have come a long way since my first PC. That being the Amstrad PC 1512 with a 20MB HDD. That thing was so amazing I put 3 guys out of work in the printing press (sorry guys) I believe the DTP software I used was called Jetsetter!? I eventually upgraded to the 1640 - Happy days
I had an amstrad cpc464 green screen back in the day 💀
Wow, I'm so envious of Linus getting to play with ALL THAT AMAZING GEAR!! I live vicariously through you, sir!
Hey Linus, I'm an audio engineer and was wondering if this CPU would be good to use with Pro Tools by Avid.
2:28: Don't have E-ATX motherboards yet though! Thinking about upgrading my current Dual Xeon X5690 system to a new dual CPU AMD EPYC system, either this generation or last, and the last generation is the only one currently available.
It's great working for Supermicro and watching this, because you never know if that's actually one of the systems I built and tested :^)
That y-cruncher record: you could also have absolutely obliterated the 100-billion-decimal-digit record, given that doing it entirely in memory is what gives you a tremendous advantage, especially with all that cache included.
god.... just imagining the fact that just *one* intermediate value in the algorithm could be approx 40GB in size makes my head spin.... *Then* considering that all of that could exist entirely in RAM just makes me collapse.
@@haniyasu8236 There is an entire microcosm of scientific programs that had to be written with the assumption that the users will never ever have enough RAM to fit the intermediate values into RAM.
@@TheBackyardChemist And I'd bet quite a few of them are approaching the point of being wrong.
@@Gabu_ Yeah, well kinda. Having 1 TB or 4 TB of RAM is now possible, but it is still not exactly cheap or common.
@@TheBackyardChemist Yeah, I'm aware. Still astounding anyways
There you go, some serious power now.
I need one for my simulations in CFD
Being able to reallocate PCIe lanes for inter-socket communication on dual-socket servers is cool.
Another AMD W? Splendid.
Yeah they need some wins after the 7900xt massive L
@@sigmamale4147 based sigma male
@@sigmamale4147 7900XT should have been between 600 and 700 me thinks
Your comment was copied by a bot btw
@@sigmamale4147 The 7900xtx wasn't a fail. Cheaper and smaller. AMD did say they weren't trying to compete with the 4090. Some people need Nvidia but for casual gamers and computer users, the 7xxx GPU series from AMD is the better choice.
WHOAAA...!!! THIS IS AWESOME...!!! REALLY EPIC...!!! MAKE MORE..!!! GOOD JOB LINUS..!! BEST CHANNEL EVER..!!!
The brilliance of scheduling the preroll to be one starring yourself, for a company that isn't even the "video sponsor". At first I was annoyed I'd wasted 20 seconds, but let's be real, it's just impressive. Hope all this extra ad viewing time can pay for an H100 vs A100 review.
0:09 really like the transition here with the impact sound effect
Definitely gonna need to get this for my first pc build!
Yeah... thinking about buying one. Might be good for my FPS in Fortnite.
I used to work at a server motherboard manufacturer, involved in the SMT and DIP processes, and believe me, the more pins a socket has, the more stress it gave me. This straight up gave me Vietnam-like flashbacks. The amount of scrap due to bent pins, where we never knew where the damage happened, is amazing. I still don't know how our company made any profit.