Fresh install…? Pff, that's a lot of work; are you sure it's needed? I've had Win 11 for a year now.
Maybe test between those 2? Fresh install vs update…:) ;)
 
Dear Lord, yet another pointless 4090@1080p benchmark. This is so silly :)

But no doubt there will be a lot of sage pontificating in the comments, as if these results had some meaningful bearing on real-life scenarios.

Also, saying that the 5800X3D wasn't an expensive CPU is the icing on the cake. And the 5xxx X family still remains a better proposition for somebody on a tight budget.
 
The 5800X3D will be for gamers what the i5-2500K was. It'll be slow for productivity, but the cache really does prop it up heavily in games.
 
Strange article... given you can still buy the 5700X3D quite readily, why wouldn't you compare that rather than the 5800X3D, which is sold out nearly everywhere?

And given the 5700X3D has a 30% discount right now on Amazon... You'd think they'd want to get some extra click-through commissions?
 
Only some games can take advantage of the extra L3 cache, but when they do, it's magic.
Also, the 1% lows benefit a lot when CPU bound. But as people here say, it was not cheap at launch (450 Euro) and it's not cheap now. In fact, you can't find a standalone 5800X3D where I live; only bundles are available now.
Sure, it was close to 316 Euro some time back, but only for a short period.

From my point of view, each GPU should be tested on a fresh install of the OS; cloning is easy now. Especially when switching between Nvidia and AMD drivers. No matter what DDU or other software you use, Windows is garbage at the registry and system-file level.

I just saw it available at only one shop...for 500 Euro...damn.

Later edit: the 7800X3D is cheaper; the tray version (no box) is 400 Euro.
 
5700X3D is good value (5800X3D is EOL and only 5% faster) but it won't come close to a 7800X3D or even a higher end Raptor Lake chip in most games. Also some games scale very well with DDR5.

Can't wait for 9800X3D. First 3D chip that won't get gimped on clockspeeds. Leaks look very promising. Like 20% improved ST and 30% improved MT perf over 7800X3D in Cinebench. That is impressive.

9800X3D is going to be the ultimate gaming chip, beating the 9900X3D and 9950X3D too, due to being single CCD. Single CCD is always preferred for gaming.

The 3D cache is way less of a constraint on the 9000X3D in general. Clock speeds will be way higher this time.

Might even replace my 7800X3D with a 9800X3D, as I am CPU bound in many games (360 Hz user, might be going 480 Hz in the next year).
 
No Anno 1800 in a CPU bound gaming test?
 
Given that you've had to resort to a 4090 at 1080p to show a difference between the two, I think it's pretty clear-cut why most gamers haven't bothered to upgrade to AM5 and Ryzen 9XXX yet.

Better to save your $$$ and put it towards the next expensive GPU.
 
They have to do the CPU testing with a 4090 in 1080p or even lower. Like every other benchmarking site. Otherwise the charts would be very boring for the readers.

In 4K, for example, one would think of the 9700X and 5800X3D as the same, almost to the point of identical gaming performance. The same goes when using a GPU like a 3060 12GB or RX 6600 8GB, which is more proportionate to 1080p. So a CPU review needs to artificially create a CPU-bound situation, and the 4090 with its power is a godsend for that, even if it doesn't resemble a real-life scenario for a 4090.

So if you want benchmarks with differences to talk about, you need to tolerate that. If you game at 4K or even 1440p, and/or if you use a GPU slower than a 4090, like a 7800 XT or 4070 or even slower ones, the differences between CPUs will be smaller, depending on the games, of course. The situation then changes to a GPU-bound real-world scenario, which is where most of us are, or want to be, in reality, and then even a slower CPU seems to perform on par with faster ones. But those reviews still tell us something important: how those CPUs will presumably age with new hardware (GPUs) and new software (games).

For example: do prepare for lots and lots of 'absurd' 1080p reviews with the even mightier 5090 once it launches. Seems absurd, okay, but it's logical: you need the fastest GPU and the lowest resolutions to clearly show the differences in CPU performance. And a 5090, and that's the point, will show greater differences between CPUs than today's flagship, the 4090. Those reviews will give us a better view of which silicon will age better with the faster GPUs and more demanding games of the future.
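
To put rough numbers on the CPU-bound vs GPU-bound point, here's a toy model (entirely my own illustration; every frame-time figure below is made up, not measured): per-frame time is roughly the slower of the CPU's and the GPU's work, so a 25% CPU gap is invisible behind a slow GPU and obvious behind a fast one.

```python
# Toy model: per-frame time ~= max(CPU frame time, GPU frame time).
# All numbers are hypothetical, purely to illustrate the methodology.

def fps(cpu_ms: float, gpu_ms: float) -> float:
    """Approximate FPS when CPU and GPU stages overlap."""
    return 1000.0 / max(cpu_ms, gpu_ms)

slow_cpu, fast_cpu = 5.0, 4.0          # two CPUs, 25% apart in frame prep time

scenarios = {
    "4K on a mid-range GPU": 16.0,     # GPU-bound: both CPUs land at ~62 FPS
    "1080p on a 4090":        2.0,     # CPU-bound: 200 vs 250 FPS, gap visible
}

for label, gpu_ms in scenarios.items():
    print(f"{label}: {fps(slow_cpu, gpu_ms):.0f} FPS vs {fps(fast_cpu, gpu_ms):.0f} FPS")
```

Same two CPUs both times; only the GPU bottleneck changes, which is exactly why reviewers reach for the fastest card at the lowest resolution.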
 
They have to do the CPU testing with a 4090 in 1080p or even lower. Like every other benchmarking site. Otherwise the charts would be very boring for the readers.
There's no point explaining it; commenters who don't understand basic testing methodology never will.

Even funnier, when they do get the benchmarks they ask for, they complain it doesn't really give them any information, or simply say "see! Told you my 2500K is still perfectly fine for gaming at 8K".

So it's just trolling at this point; better to just ignore them or block them so you don't have to see their rubbish.
 
Aaand I'm still using that 2021 5800X, because this site's own comparison benchmarks showed little consistent uplift (or coverage thereof) at 4K, and I use 3440x1440 in the office and 4K in the lounge far less often. It just wasn't worth the £300 two years ago, and it's not worth the same amount right now when I'd eventually be paying 2-3x that price for a later 8c/16t X3D plus the necessary new mobo/RAM. As it is, I'm waiting to see how the upcoming X3Ds step up before even starting to make plans.

As an aside, when the question of better gaming fps came up, it turned out better for consistent uplift and coverage to grab a 7900 XTX, replacing a 6800 XT which went to upgrade my gf's 3070 at 3440x1440 (a two-birds-with-one-stone deal), for not much more than that 7800X3D with matching mobo/RAM would have cost anyway. And I'd still probably only be using some 50-60% of that CPU, just like the 5800X. Some of my games are notably CPU-heavy (as well as often not appearing on console, which is a primary reason I go PC), but they also generally sit in genres that don't need higher fps to stay enjoyable, so it evens out.

Tbh I've been buying toward the 'higher end' for two gens now, and as much as I appreciate the end result, it can get exhausting. In future I think I'll keep my plans to simpler questions answered and simpler solutions delivered.
 
For example: do prepare for lots and lots of 'absurd' 1080p reviews with the even mightier 5090 once it launches. Seems absurd, okay, but it's logical: you need the fastest GPU and the lowest resolutions to clearly show the differences in CPU performance. And a 5090, and that's the point, will show greater differences between CPUs than today's flagship, the 4090. Those reviews will give us a better view of which silicon will age better with the faster GPUs and more demanding games of the future.
Or maybe not. Zen 5 is the first consumer-class CPU to REALLY have a good AVX-512 implementation. How much will future games use AVX-512? We don't know yet. So what do these 1080p benchmarks tell us about which CPU will be better in the future? Basically nothing. AVX-512 is about the only technology today, outside of very large caches, that can give high double-digit performance boosts in games. Predicting how AVX-512 will be used in future games is nearly impossible, so any "long-term prediction" benchmarks are basically useless.
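
If anyone wants to check whether their own chip even exposes it, here's a minimal sketch (assumes Linux, where the kernel lists feature flags in /proc/cpuinfo; Windows/macOS would need a different method):

```python
# Read the CPU feature flags from /proc/cpuinfo (Linux only) and report
# a few AVX-512 subsets. avx512f is the "foundation" subset; real game
# code would typically also want avx512vl/avx512bw and friends.

def cpu_flags() -> set[str]:
    with open("/proc/cpuinfo") as f:
        for line in f:
            if line.startswith("flags"):
                return set(line.split(":", 1)[1].split())
    return set()

flags = cpu_flags()
for feature in ("avx2", "avx512f", "avx512vl", "avx512bw"):
    print(f"{feature}: {'yes' if feature in flags else 'no'}")
```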
 
A test that's always missing from X3D reviews is 3D simulation. I'd like to know if it's actually an improvement compared to non-3D-V-Cache chips in that field. I've read somewhere that it does give some improvement, but since they didn't provide more detailed data, I can't exactly trust them.
 
I want to see online games in the testing, such as WoW, FF XIV, New World, Throne and Liberty, etc. I've heard that in MMORPGs, 3D V-Cache gives a very good uplift compared to regular games.
 
What changed in the Windows patch to cause such an uplift?
 
