Nvidia GeForce RTX 5090 tape-out rumors: smaller than the 4090, 448-bit bus, monolithic die

Daniel Sims

Rumor mill: The rumored specifications of Nvidia's next-generation consumer graphics cards have fluctuated for months but could be starting to solidify as their unveiling and release draw closer. The latest reports introduce slight changes to the expected memory configuration and physical size. The high-end and flagship products are still expected to launch before the end of 2024.

Nvidia tipster @kopite7kimi reports that Nvidia has taped out the GB202 and GB203 GPUs, expected to power the upcoming GeForce RTX 5090 and 5080 graphics cards. The flagship 5090 might not be as large as previously expected and could use a somewhat unusual VRAM configuration.

The RTX 4090 isn't just known for being the fastest consumer GPU – it's also ridiculously huge owing to the three-slot cooler used by Nvidia's Founders Edition. According to Kopite, its successor will return to a more normal-looking two-slot design with two fans.

Team Green's reasoning remains unclear, but the decision could indicate a significant shift regarding the 5090's wattage or shaders. The 4090 is a 450W GPU, and old leaks suggest that Nvidia previously tested a successor that could have been a 600W monster. Even if the company goes in the opposite direction, board partners could still introduce larger designs.

Meanwhile, Panzerlied – a known leaker on the Chiphell forums – claims that the GB202 incorporates a dense memory layout with placements for 16 modules and a 448-bit interface. This counters prior reports indicating a 512-bit bus.

If the 5090 features 28GB of GDDR7 VRAM at 28Gbps as rumors have long suggested, then it might only utilize 14 of the GB202's memory module placements – each module connects over a 32-bit interface, so 14 modules account for the 448-bit bus and, at 2GB apiece, for the 28GB of VRAM. A 448-bit bus would theoretically give the flagship GPU around 1.5TB/s of memory bandwidth – roughly a 50 percent increase over the 4090's 1TB/s. Although the 5080 and 5070 are also expected to include GDDR7 RAM, they will likely feature only 256-bit and 192-bit interfaces, respectively.
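For readers who want to check the arithmetic, here is a minimal back-of-the-envelope sketch. It assumes the standard 32-bit per-module GDDR interface, 2GB (16Gb) GDDR7 modules, and the rumored 28Gbps per-pin data rate – none of which Nvidia has confirmed:

```python
# Back-of-the-envelope check of the rumored RTX 5090 memory specs.
# Assumptions (rumored, not confirmed by Nvidia): 14 active GDDR7
# modules, 32 bits and 2 GB per module, 28 Gbps per pin.

MODULES = 14               # 14 of GB202's rumored 16 module placements
BITS_PER_MODULE = 32       # standard per-module GDDR interface width
GB_PER_MODULE = 2          # 16 Gbit (2 GB) GDDR7 modules
DATA_RATE_GBPS = 28        # rumored per-pin data rate

bus_width = MODULES * BITS_PER_MODULE        # total bus width in bits
capacity = MODULES * GB_PER_MODULE           # total VRAM in GB
bandwidth = bus_width / 8 * DATA_RATE_GBPS   # bytes per transfer * Gbps -> GB/s

RTX_4090_BANDWIDTH = 1008  # GB/s (384-bit bus at 21 Gbps GDDR6X)
uplift = (bandwidth / RTX_4090_BANDWIDTH - 1) * 100

print(f"Bus width: {bus_width}-bit")
print(f"Capacity:  {capacity} GB")
print(f"Bandwidth: {bandwidth:.0f} GB/s")
print(f"Uplift vs RTX 4090: {uplift:.0f}%")
```

Running it yields a 448-bit bus, 28GB of capacity, and about 1,568GB/s – which rounds to the roughly 1.5TB/s and 50-percent-plus uplift over the 4090's 1,008GB/s cited above.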

Amid the recent reports, Kopite also reiterated prior information stating that the RTX 5000 cards, codenamed "Blackwell," will maintain monolithic designs. Rival AMD has been gradually moving toward chiplets with its CPUs and GPUs, but Nvidia has only utilized the format for its enterprise AI graphics processors.

Nvidia is rumored to be planning a late 2024 launch for the RTX 5090 and 5080, with the 5080 possibly shipping sooner. They might face new competition from AMD's RDNA 4 and Intel's Arc Battlemage.


 
Biding my time and waiting for more info, but I think a 5080 will be the successor to my 1080... though I swear I could likely get another 1-2 years out of it easy.

 
Biding my time and waiting for more info, but I think a 5080 will be the successor to my 1080... though I swear I could likely get another 1-2 years out of it easy.
Funny, because it'll cost twice as much. The 5070 might be the one closer to a similar price lol
 
I actually think PCs have really good staying power now, because advancements in overall power just aren't there. We used to see a doubling of performance per gen; now we see half that.
Welcome to the end of Moore's law. Paying more for less is the new trend from here on.
 
I actually think PCs have really good staying power now, because advancements in overall power just aren't there. We used to see a doubling of performance per gen; now we see half that.

Yeah, the 3090 is four years old and still a high-end performer. Even the 2080 Ti is only now starting to become a mid-tier performer at six years old. These days every new generation is a bit-more-performance-at-lower-power-draw refinement instead of the massive gains we used to see each generation.
 
Lel, toot :) Things might change if multi-die and 3D stacking take off. I know the many people with burned AMD CPUs will say it started badly, and RDNA 3's multi-chip design is nothing stellar, but starts are always hard :) Maybe with time Foveros takes off, and who knows, maybe with Intel's budget Frankenstein lake they'll do multi-chip better than AMD :)

They know very well that continuing to shrink has become impossible. Just look: only one company in the world is still able to do it, and that tells you everything. The multi-chip and 3D technologies are not ready, right? So instead they pushed the thermals to the sky. Since when is it normal to run copper chips at 100°C and call it OK? Since now, because they know they have nothing else to sell. And look what happened: Intel chips burned under the relaxed motherboard power limits, AMD chips burned because of the 3D cache, video cards got burned from power overloading... take one or all of them, and you know there are problems :(

But it can get better :) The problem is they need money to develop multi-chip and 3D technologies, and we all know those are VERY hard to do. Customers can't just hand them money for nothing, so they did this instead: squeeze out some performance through "overloading" so people pay them enough to get multi-chip and 3D right, and those two are the hardest technologies to do :( So it takes much more time and money :( ... And then there's the whole story of carbon nanotubes, which I think no one in the industry wants to touch because they're afraid of it: they don't know where it will go, or whether it will go anywhere at all... :(
 
Two slots for a more complex and more powerful GPU on the same TSMC 5nm? I mean, look, you can fit a single-slot cooler to the RTX 4090 as well, but expect it to run badly due to throttling. Let's see the end result.
 
Kinda silly to argue with jokes, but this really is a low-effort one. The 4090 is nearly two years old and it's still a top performer, by a hefty margin. And I doubt the arrival of the 5xxx series will relegate it to mid-tier.
From the rumors, the only GPU that will dethrone the 4090 is the 5090. The 5080 will probably be similar in performance.


The 5090 went from a 512-bit, four-slot throttle monster to a 448-bit, 28GB VRAM, two-slot, two-fan design. If you asked AI how to get maximum public hype around a product, it would probably look like this: create ridiculous FUD, then leak out something more reasonable so that Nvidia stays in the endless news cycle. Rinse, repeat. I bet Nvidia is directly behind the leaks, being the master of market manipulation, misinformation, leaks, rumors and all!
 
Biding my time and waiting for more info, but I think a 5080 will be the successor to my 1080... though I swear I could likely get another 1-2 years out of it easy.

For gaming, or for all the CUDA work you are doing?

Because if you are a gamer, we already know that per-watt and per-size of GPU, RDNA is much more powerful for gaming than Nvidia's non-gaming architecture, which was designed for another purpose.

 
If the 5080 can't dethrone the 4090, I will be disappointed.
By dethrone, I'm sure we're on the same page in terms of rasterization, less so RT, and even less so the smoke-and-mirrors tricks called DLSS. Although the 4080/Super does sell because its RT is superior to the competition's, even at a roughly 10 percent higher MSRP and similar rasterization. If the 5080 has rasterization performance similar to the 4090's and superior RT at $999, it would be amazing. Unfortunately, some of us, myself included, believe Nvidia will launch the 5080 at a price similar to the 4080's original $1,199 MSRP and eventually taper it to $999 based on market demand over time, like they did with the Super.

Update: my original prediction was that it would match the 4090D in performance, due to the AI performance threshold needed for the sanctioned Chinese market.
 
For gaming, or for all the CUDA work you are doing?

Because if you are a gamer, we already know that per-watt and per-size of GPU, RDNA is much more powerful for gaming than Nvidia's non-gaming architecture, which was designed for another purpose.
But at the end of the day, the "per-watt and per-size" doesn't matter for many of us. If Card A gives me higher fps and better RT capabilities than Card B, I'm going with Card A (price dependent) - regardless of the purpose for which it was designed.
 
But at the end of the day, the "per-watt and per-size" doesn't matter for many of us. If Card A gives me higher fps and better RT capabilities than Card B, I'm going with Card A (price dependent) - regardless of the purpose for which it was designed.
At the end of the day...
The card with the more potent gaming architecture, one that is engineered 100% for games, is better than an architecture engineered for crypto and CUDA work.

The only reason the 4090's Ada Lovelace architecture beats the RDNA in the XTX at games is that it costs $1k more and is twice as big. Understand? And if you downsize Ada Lovelace to the smaller 4080 (still bigger, still more energy, still more money), NV's architecture still gets beaten by RDNA in frames.

RDNA is for gaming, CUDA is for work.
 
For gaming, or for all the CUDA work you are doing?

Because if you are a gamer, we already know that per-watt and per-size of GPU, RDNA is much more powerful for gaming than Nvidia's non-gaming architecture, which was designed for another purpose.
To be able to do whatever tf I want to do... really, really fast. 😁
 
At the end of the day...
The card with the more potent gaming architecture, one that is engineered 100% for games, is better than an architecture engineered for crypto and CUDA work.

The only reason the 4090's Ada Lovelace architecture beats the RDNA in the XTX at games is that it costs $1k more and is twice as big. Understand? And if you downsize Ada Lovelace to the smaller 4080 (still bigger, still more energy, still more money), NV's architecture still gets beaten by RDNA in frames.

RDNA is for gaming, CUDA is for work.
Highest fps and RT performance at the top tier is a WIN, regardless of what it was "designed" for. Why can't AMD offer a challenger to Nvidia's best if the NV cards aren't even engineered for games (as you claim)? Looks to me like the 4090 offers the best of both worlds, price aside.

AMD isn't the mom & pop store that's a beloved local gem, struggling in the shadow of the Walmart next door, as so many fanbois like to portray. The bottom line is that AMD's architecture has always been a gen or two behind Nvidia's. And the driver situation has bitten them in the azz more than a few times.

I prefer to buy products that work as they should out of the box. AMD burned through their "Please be patient, we're a struggling but virtuous company and are still working on it to make it the best-est! Now who wants some fine wine!?" a long time ago. They have plenty of capital to compete with Nvidia. Bottom line is, they're fully committed to an architecture that is simply inferior to Team Green's. Clearly, CUDA is for work AND gaming!
 