My Thoughts on the RTX 5000 Series Announcement
Jensen showed the world a new set of GPUs (and a new leather jacket).
When the RTX 4000 Series was announced, I immediately wrote it off as a waste of money. The cards’ rasterization performance didn’t offer a big enough uplift to justify the cost of an upgrade, and a high cost it was: upgrading from my RTX 3080 to an RTX 4080 would have cost me a whopping $1,199. I guess I could have sold a kidney to subsidize the cost, but then how would I pay for my next upgrade? Add to this that the GPU’s main selling point was an unproven “fake frame-inserting” AI, and I was out.
My Current Rig, and Current Perspective
Currently, my gaming rig has an AMD Ryzen 7 5800X and an Nvidia RTX 3080 FE. A beast of a machine when I first put it together, but it’s starting to show its age a bit. If I weren’t obsessed with tech and having the latest hardware, I wouldn’t even think of upgrading. I haven’t run into a single game I couldn’t play at reasonable frame rates and settings.
That being said, I do have a 4K OLED TV that I’d love to play on, and although I originally wrote off DLSS 3 as a gimmick, I’m willing to accept that I was wrong given how much people seem to enjoy the experience. I’m kind of excited to see what good devs can do in this dystopian future where only 1 in 16 pixels is actually rendered.
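For the curious, here’s a back-of-the-napkin sketch of where that 1-in-16 figure comes from. It assumes DLSS is upscaling from a quarter-resolution render (Performance mode) and that multi-frame generation is outputting four frames for every one actually rendered; the exact ratio shifts depending on which modes you pick.

```python
# Rough math behind the "1 in 16 pixels rendered" claim.
# Assumptions: DLSS Performance mode (quarter-resolution render) and
# 4x multi-frame generation (1 rendered frame for every 4 displayed).

upscale_pixel_ratio = 1 / 4   # pixels rendered per pixel displayed (Performance mode)
frame_gen_ratio = 1 / 4       # rendered frames per displayed frame (MFG 4x)

rendered_fraction = upscale_pixel_ratio * frame_gen_ratio
print(f"Fraction of displayed pixels actually rendered: {rendered_fraction}")  # 0.0625 = 1/16
```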
Nvidia’s Marketing and the Current Landscape
Let’s get this out of the way: there’s no way I’m going to believe that the 5070 performs as well as a 4090 at only $549 without a proper review, even with all the RTX features enabled. In pure rasterization, I’d have to see it in person and unbox the GPU myself to believe it. If I had to guess, it probably performs closer to a 4080 in rasterization, maybe a bit worse, but that’s really just a guess. Let’s see if I eat my words when the card is released.
But does it really matter if we’re talking about rasterization versus AI-generated performance nowadays? I get that not every game uses RTX features, but any demanding game released recently that I could name uses (or in some cases relies on) some sort of upscaling or AI features to perform well. Like it or not, this is the gaming landscape of 2025, and the way we think about the hardware should take this into account.
My Thoughts On the Lineup
I think that across the board we’re looking at a pretty amazing GPU lineup. At the top, the 5090 is expensive, very expensive. I’m actually not that mad about this. The RTX 5090 is so far above the other cards spec-wise that I see it as the first true “Titan” class card since, well, the Titan Xp. It’s out of reach for me, and I’m just going to ignore it in my search for a new GPU. As long as developers don’t treat it as standard, or even semi-standard, hardware, I think it’s fine to keep it up in dreamland.
Without taking architectural changes into account, the core counts and memory speeds on the rest of the cards look good too. The 80-class card coming back down to $999 is still expensive, but less offensive than the previous generation. And if the 5070 can even come close to 4090 performance while using DLSS 4 and multi-frame generation (MFG), then I think that should be a good deal too. The only thing that worries me here is that Nvidia decided to keep this card at only 12GB of memory. Maybe the upgrade to GDDR7 will help keep it usable well into the future, but I’m not knowledgeable enough to tell if that will be the case. I suspect it won’t be.
The most interesting card, in my opinion, is the RTX 5070 Ti. With its 16GB of GDDR7 memory, its memory bandwidth (896 GB/sec) is within spitting distance of the 5080 (960 GB/sec) and quite a bit faster than the 5070 (672 GB/sec). With the price and the “AI TOPS” falling right between both of its neighbors, I think this will be the best bang-for-your-buck Nvidia GPU this time around. I’m going to keep an eye on this one when reviews start to come out.
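If you want to sanity-check those bandwidth numbers, they fall out of the memory bus width multiplied by the effective data rate. The bus widths and GDDR7 speeds below (256-bit at 30 Gbps for the 5080, 256-bit at 28 Gbps for the 5070 Ti, 192-bit at 28 Gbps for the 5070) are my assumptions based on the announced specs, so treat this as a rough sketch rather than gospel.

```python
# Rough sanity check: bandwidth (GB/sec) = bus width (bits) * data rate (Gbps) / 8.
# Bus widths and GDDR7 data rates are assumptions based on the announced specs.
cards = {
    "RTX 5080":    {"bus_bits": 256, "gbps": 30},
    "RTX 5070 Ti": {"bus_bits": 256, "gbps": 28},
    "RTX 5070":    {"bus_bits": 192, "gbps": 28},
}

for name, spec in cards.items():
    bandwidth_gb_s = spec["bus_bits"] * spec["gbps"] / 8
    print(f"{name}: {bandwidth_gb_s:.0f} GB/sec")
# Expected output: 960, 896, and 672 GB/sec respectively.
```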