Selling like hotcakes: The extraordinary demand for Blackwell GPUs illustrates the need for robust, energy-efficient processors as companies race to deploy more sophisticated AI models and applications. The coming months will be critical for Nvidia as the company works to ramp up production and fill the overwhelming orders for its latest product.

Nvidia's latest Blackwell GPUs are experiencing unprecedented demand, with the company reporting that it has sold out of these next-gen processors. Nvidia CEO Jensen Huang revealed the news during an investors meeting hosted by Morgan Stanley. Morgan Stanley analyst Joe Moore notes that Nvidia executives disclosed that their Blackwell GPU products have a 12-month backlog, echoing a similar situation with Hopper GPUs several quarters ago.

The overwhelming demand for Blackwell GPUs comes from Nvidia's traditional customers, including major tech giants like AWS, CoreWeave, Google, Meta, Microsoft, and Oracle. These companies have purchased every Blackwell GPU Nvidia and its manufacturing partner TSMC can produce for the next four quarters. The extreme demand indicates that Nvidia's already considerable footprint in the AI processor market should continue to grow next year, even as rivals such as AMD, Intel, and various cloud service providers grab their share of the market.

"Our view continues to be that Nvidia is likely to gain share of AI processors in 2025, as the biggest users of custom silicon are seeing very steep ramps with Nvidia solutions next year," Moore said in a client note.

Nvidia unveiled the Blackwell GPU platform in March. It includes the B200 GPU and GB200 Grace "super chip." These processors can handle the demanding workloads of large language model (LLM) inference while significantly reducing energy consumption, a growing concern in the industry.

Nvidia has overcome initial packaging issues with its B100 and B200 GPUs, allowing the company to ramp up production. Both GPUs utilize TSMC's advanced CoWoS-L packaging technology. However, questions remain about whether TSMC has sufficient CoWoS-L capacity to meet the skyrocketing demand.

Another potential bottleneck in the production process is the supply of HBM3E memory, which is crucial for high-performance GPUs like Blackwell. Tom's Hardware pointed out that Nvidia has yet to qualify Samsung's HBM3E memory for its Blackwell GPUs, adding another layer of complexity to the supply chain.

In August, Nvidia acknowledged that its Blackwell-based products were experiencing low yields, necessitating a re-spin of some layers of the B200 processor to improve production efficiency. Despite these challenges, Nvidia remains confident in its ability to ramp up Blackwell production in Q4 2024. It expects to ship several billion dollars worth of Blackwell GPUs in the last quarter of this year.


 
Nvidia has plenty of cash to build a fab of their own.
 
Nvidia has plenty of cash to build a fab of their own.
But not the technical know-how or experience… fabs don’t grow on trees… they take years of work and even the largest companies can fail…
 
They could easily buy TSMC, with a current market cap of 840B
lol, market cap doesn't mean available cash… they have less than 50 billion in cash and cash equivalents… so they're about 800 billion short… and TSMC isn't for sale anyway…
 
They could easily buy TSMC, with a current market cap of 840B

Just because a company has a lower market cap doesn't mean it can be bought. Also, you would probably have to offer 50-100% over market cap to even get them to consider the offer. Most companies would not sell anyway.

Nvidia could buy AMD, Intel, and TSMC if what you're saying were true. Not going to happen.

Nvidia was not even allowed to buy ARM, so there are also laws preventing stuff like this.
 
The 5090's leaked price is insignificantly more expensive than the 4090's. Hopefully that doesn't mean the performance gain is less than the typical 40% delta, and it's instead closer to the improvement we saw from 3090 to 4090.
 
The 5090's leaked price is insignificantly more expensive than the 4090's. Hopefully that doesn't mean the performance gain is less than the typical 40% delta, and it's instead closer to the improvement we saw from 3090 to 4090.

I don't see the 5090 beating the 4090 by more than 50%. I expect closer to 30-40% really, and this is at UHD+.

At 1440p, the difference will probably be only 20-25%.

The 5090 is going to be a no-brainer for 4K gamers, if they can afford it.
 
I don't see the 5090 beating the 4090 by more than 50%. I expect closer to 30-40% really, and this is at UHD+.

At 1440p, the difference will probably be only 20-25%.

The 5090 is going to be a no-brainer for 4K gamers, if they can afford it.
Same; if the price is similar, then at best we will get the traditional 40% delta, in my guesstimate.
 
Same; if the price is similar, then at best we will get the traditional 40% delta, in my guesstimate.
Yeah, would probably be okay considering it's the same node, optimized 5nm.

The only ways Nvidia could gain performance with the 5000 series were:

1. Improve the arch (does not look like it - don't think it was a focus at all, more like refined Ada)
2. Increase power
3. Increase chip size
4. Use faster memory, however this will only be noticeable in high-res gaming and probably won't affect 1080p and 1440p perf much
 
The 5090 will absolutely be 2000 dollars minimum. Look at those specs again if in doubt. Nothing will come close to beating it until the 6080/6090, in 2 years' time. The 6080 might not even beat it. Just like I don't expect the 5080 to beat the 4090.

A 512-bit bus with 32GB of GDDR7 memory and around 50% more cores than the 4090. This thing is going to be a beast.

Even the 5080 will probably deliver close to 4090 performance.

The 4090 has been the best consumer card since release. The 5090 will take this crown for another 2 years. This is how Nvidia can ask 1500-2000 dollars for a GPU: it will last 2 generations with ease and still be considered high-end. This is what flagship products are about.
 
Yeah, would probably be okay considering it's the same node, optimized 5nm.

The only ways Nvidia could gain performance with the 5000 series were:

1. Improve the arch (does not look like it - don't think it was a focus at all, more like refined Ada)
2. Increase power
3. Increase chip size
4. Use faster memory, however this will only be noticeable in high-res gaming and probably won't affect 1080p and 1440p perf much
Makes sense, hence the rumored atypical 512-bit VRAM spec to mitigate any controllable bottlenecks through memory bandwidth.
 
The RTX 5000 series might get low availability if true. Same node.

AMD has a golden opportunity to grab market share, and then it leaves the high end. Just typical AMD.
 
The RTX 5000 series might get low availability if true. Same node.

AMD has a golden opportunity to grab market share, and then it leaves the high end. Just typical AMD.
They haven't had this "golden opportunity" in years... they didn't give up on the high end "just because"... it was because they can't match Nvidia there...
 
They haven't had this "golden opportunity" in years... they didn't give up on the high end "just because"... it was because they can't match Nvidia there...

It's because even when matching raster, and sitting a slim single gen behind in RT/upscaling for half the price of the Nvidia peer, they still got nothing but s**t on from all corners.
Why would anybody want to keep plugging on at great cost (which they're probably barely covering) with such odds? It's like expecting the contender who's gone longer vs Mike Tyson than anybody else to stay in the ring for a bonus round so they can 'do honour' by getting a TBI.
Nah, in the same way Nvidia's real interest and profits are elsewhere (with gaming barely a twitch of their overlarge tail), AMD are cutting their losses back to where there'll actually be some appreciation and gain.

It's a damn shame, and only time will tell how big a shame, but I'm with AMD on this... even if it means possible issues for my own end down the line.
 
It's because even when matching raster, and sitting a slim single gen behind in RT/upscaling for half the price of the Nvidia peer, they still got nothing but s**t on from all corners.
Their high-end cards haven't matched Nvidia's, except in very specific use cases, in a while... and when they have, it's at the cost of power consumption and heat.
AMD could probably come close to matching Nvidia's high-end cards - but they'd consume insane amounts of power and probably run at over 100 degrees... AMD doesn't have the resources for R&D the way Nvidia does - they can't catch up and never will :(
Why would anybody want to keep plugging on at great cost (which they're probably barely covering) with such odds? It's like expecting the contender who's gone longer vs Mike Tyson than anybody else to stay in the ring for a bonus round so they can 'do honour' by getting a TBI.
Nah, in the same way Nvidia's real interest and profits are elsewhere (with gaming barely a twitch of their overlarge tail), AMD are cutting their losses back to where there'll actually be some appreciation and gain.

It's a damn shame, and only time will tell how big a shame, but I'm with AMD on this... even if it means possible issues for my own end down the line.
AMD's GPU division (and ATI before them) have always found their greatest successes in the middle-low end cards that the masses can use. Maybe one day Intel can rival Nvidia at the high end - but I wouldn't hold your breath.
 
They haven't had this "golden opportunity" in years... they didn't give up on the high end "just because"... it was because they can't match Nvidia there...
They don't need to match Nvidia in the enthusiast tier; however, they should be competitive in the sub-600-800 dollar market and especially the sub-500 dollar market. If they can't, then it is not looking good for AMD.

FSR and RT performance need work as well, or it will bite them in the *** later - it already does, actually. Tons of new games rely on upscaling and have RT elements that can't be disabled, only lowered, meaning AMD GPUs take a big performance drop vs Nvidia cards.

Sadly, I think AMD is washing money down the drain chasing AI market share instead - it is mostly a lost battle, and they probably won't be able to catch Nvidia with much lower R&D funds in this area anyway.

AMD should really consider whether AI is worth chasing too hard. The bubble might burst in a few years, and Nvidia has like 95% AI market share now. I think AMD and Intel only have a tiny market share because companies don't want to wait for actual Nvidia GPUs or can't afford Nvidia, not because AMD and Intel are actually good for AI, unless you need them for niche calculations.
 
Sadly, I think AMD is washing money down the drain chasing AI market share instead - it is mostly a lost battle, and they probably won't be able to catch Nvidia with much lower R&D funds in this area anyway.
Just like AMD cannot catch Intel with much lower R&D funds.................................. ;)
 
Just like AMD cannot catch Intel with much lower R&D funds.................................. ;)
Except that Intel's market cap is under 100 billion USD and AMD's is just over 250 billion USD...
Intel chooses to spend more on R & D... and while AMD has been the CPU to beat for the past several years, it wouldn't be very surprising to see Intel go ahead again... it's not like AMD's 9000 lineup has been so amazing...
 
Except that Intel's market cap is under 100 billion USD and AMD's is just over 250 billion USD...
Intel chooses to spend more on R & D... and while AMD has been the CPU to beat for the past several years, it wouldn't be very surprising to see Intel go ahead again... it's not like AMD's 9000 lineup has been so amazing...
AMD started developing Zen 5 somewhere around 2020, long before AMD surpassed Intel in market cap.
 