Sadly, those 12GB, 16GB, and larger VRAM buffers are being underutilized here
Let’s take a moment to thank NVIDIA for the stasis in texture quality we’ve seen in most games over the last several years. Their seemingly endless array of high-performance GPUs with insufficient VRAM comes to mind. Cards with 3GB, 3.5GB (yes, you know the one), 6GB, and 8GB have hindered the development of exceptional game visuals. Then came the 3080 10GB to tell us, yet again, that VRAM supposedly doesn’t matter much on a fast GPU. And let’s not forget that AMD did the same until RDNA 2.
This situation reminds me of Intel's approach to CPU core counts from Haswell onward, which stagnated for years. For nearly a decade you could rely on a 4-core/8-thread CPU like the 4790K, thanks to Intel's deliberately slow pace of innovation. It wasn’t until AMD's Ryzen arrived that anything changed; once real competition appeared, Intel suddenly started ramping up core counts like crazy.
So, we need a healthy market with robust competition to drive progress. However, consumer awareness is equally important. Articles suggesting that "8GB of VRAM is enough" (such as those on this site, even from the same reviewer) haven’t been particularly helpful. I welcome any shift away from the "8GB will be sufficient for a long time" mantra. Additionally, we need more high-resolution texture packs as add-ons to set consumer expectations for new GPUs. Otherwise, 12GB will just be the new 8GB for a long time.
BTW:
Let’s not forget that this prolonged strategy of skimping on VRAM also impedes the development of local AI solutions, both now and in the near future. The issue even affects gaming itself, particularly as new AI-driven games are developed. NVIDIA’s own introduction of its NVIDIA ACE for Games technology highlights the problem. Shooting yourself in the foot, one could say.
They used a low-end LLM with only 4 billion parameters (very small by current standards) because their mid-range cards lack the VRAM to handle both the game and the AI model simultaneously (see the rough math below). This was a misstep, and it will take years to address the resulting limitations on a large scale. So thanks, Nvidia.
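To put that in perspective, here is a back-of-envelope sketch of the VRAM a 4B-parameter model would occupy at different weight precisions, alongside an assumed game VRAM budget. The game budget and the choice to ignore KV-cache and runtime overhead are my own illustrative assumptions, not NVIDIA's published figures:

```python
# Back-of-envelope VRAM estimate for running an LLM alongside a game.
# All numbers here are illustrative assumptions, not official NVIDIA figures.

PARAMS = 4e9          # ~4 billion parameters, as mentioned above
GAME_VRAM_GB = 8.0    # assumed VRAM a modern game already uses at 1440p/4K

BYTES_PER_PARAM = {
    "FP16": 2.0,      # half-precision weights
    "INT8": 1.0,      # 8-bit quantized weights
    "INT4": 0.5,      # 4-bit quantized weights
}

for precision, bytes_per_param in BYTES_PER_PARAM.items():
    model_gb = PARAMS * bytes_per_param / 1e9   # weights only, no KV cache or overhead
    total_gb = model_gb + GAME_VRAM_GB
    print(f"{precision}: model ~{model_gb:.1f} GB, game + model ~{total_gb:.1f} GB")
```

Even with aggressive 4-bit quantization, an 8GB card has essentially no headroom left once the game itself is loaded, which is presumably why such a small model was chosen in the first place.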
