We got a chance to test the mobile GeForce RTX 4090, the most powerful graphics card currently available in a laptop PC. Benchmarks, DLSS 3… here’s our full review.
After the desktop versions of its Ada Lovelace architecture graphics cards, Nvidia announced mobile versions of its RTX 40xx range at CES 2023. In total, five GPUs were announced, bringing more power of course, but also new technologies such as the arrival of the 5th generation of Max-Q.
We tested the GeForce RTX 4090 mobile to get an idea.
Technical specifications of the GeForce RTX 4090 mobile
Nvidia’s mobile GeForce RTX 4090 is the most powerful graphics card currently available in laptops.
| | GeForce RTX 4090 mobile |
| --- | --- |
| CUDA cores | 9,728 |
| Clock rate | 1455 – 2040 MHz |
| TGP | 80 – 150 W (+25 W) |
| Memory | 16 GB GDDR6 |
| Memory bus | 256-bit |
Note that this GPU therefore packs more than 2,300 more CUDA cores than the mobile RTX 3080 Ti (7,424), and that its TGP climbs to 175 W with Dynamic Boost 2.0.
Our build: the Razer Blade 16
To test the performance of this mobile RTX 4090 and see what these additional cores bring, Nvidia provided us with a Razer Blade 16 (2023). In addition to its brand new GPU, it is equipped with a 24-core/32-thread Intel Core i9-13950HX processor with a base frequency of 2.2 GHz (5.5 GHz in turbo), 32 GB of DDR5 RAM (2x 16 GB @ 4800 MHz) and a 1 TB NVMe SSD.
Finally, everything is displayed on a 16-inch UHD+ panel (3840 x 2400 pixels) at 120 Hz. For this test, unless otherwise noted, results were obtained with the screen definition set to 2560 x 1600 pixels, to stick to the most common use case. This is a definition we should see more often on laptops in 2023.
Our tests were performed on Windows 11 Home (22H2) with Nvidia drivers in version 528.37.
The benchmarks of the RTX 4090 mobile
The mobile RTX 4090 on 3DMark
When it comes to measuring and comparing the raw graphics power of a GPU (in particular without involving AI), the synthetic benchmark 3DMark Time Spy Extreme is the obvious reference.
In this test, which outputs a 4K render (under DirectX 12), we obtained a graphics score of 10,518, with an average of 67.78 frames per second in the first part and 60.91 FPS in the second. We noticed that despite the heat (up to 80°C during our third consecutive run), the scores never wavered. We thus land slightly below the desktop RTX 4070 Ti (10,946), but well above the mobile RTX 3080 Ti (6,513).
Remember that this test is purely synthetic, without ray tracing or DLSS.
In February 2022, we wrote that the mobile RTX 3080 Ti was very impressive in 3DMark's DirectX Raytracing test, measuring 36.94 frames per second, "simply one of the best results we have recorded in this test". A year later, the mobile RTX 4090 posts an average of 68.70 FPS in this same test and sustains that score over time.
For comparison, the desktop RTX 4070 Ti averaged 66.44 FPS in the same test. The Ada Lovelace architecture once again shows its clear superiority over previous generations, which could not exceed 60 frames per second.
The mobile RTX 4090 in V-Ray and OctaneBench
The Ada Lovelace architecture is designed for both gaming and professional use. That is what we wanted to measure with V-Ray and OctaneBench, benchmarks that evaluate 3D rendering performance.
In this little game, our test machine scored 747 on OctaneBench with ray tracing enabled and 525 without it. In V-Ray, it climbs to 2,550 on its CUDA cores and up to 3,460 with ray tracing.
Even for professionals, the mobile RTX 4090 is therefore currently the best option in a laptop to gain efficiency.
The RTX 4090 mobile in games
To push the capabilities of the mobile RTX 4090 further, our first test focused on Cyberpunk 2077, a recent, resource-intensive and DLSS 3-compatible game.
Using the benchmark built into the game, with settings on Ultra, we reached an average of 97 frames per second with DLSS 3 in automatic mode (59 ms of latency). With DLSS 2, the game runs at 65 FPS on average, with 45 ms of latency. Moving to 4K (3840 x 2400 pixels), we still finished the benchmark with an average of 76 FPS in DLSS 3 (51 ms of latency), which is already a lot for this game at this definition.
Setting DLSS to "ultra performance" with frame generation (DLSS 3), we then climbed to 96 FPS in 4K at 61 ms of latency. Again, this is the kind of performance you get from a desktop RTX 4070 Ti. Suffice to say that for a laptop, it's quite a feat.
With DLSS 3, we therefore get 4K performance more or less equivalent to what the mobile 3080 Ti offered… in Full HD.
Spider-Man: Miles Morales
Back to 1600p to test DLSS 3 on Spider-Man: Miles Morales, a fast-paced game where the on-screen action moves quickly.
In the Marvel game, we recorded an average of 85 FPS with DLSS 2 at 40 ms of latency, versus 115 FPS and 70 ms of latency with DLSS 3. The gain in frames per second is obvious, while the latency difference is not really felt controller in hand. And for good reason: the difference in display lag between the moment you press a button and the moment the action appears on screen is less than one frame.
A Plague Tale: Requiem
For a slightly less flattering example for this GPU, let's look at A Plague Tale: Requiem, also DLSS 3-compatible, still at 1600p and with Ultra settings.
On average framerate, the game does very well, in the same range as, or even better than, the desktop RTX 4070 Ti. Depending on the selected options, the game runs at around 175 FPS (DLSS 3 Auto), or even 216 FPS (DLSS 3 Ultra Performance). Here too, the small measured latency surplus does not disturb progress through this single-player game.
Nonetheless, the game illustrates the limits of playing on a laptop. While these results are very good, even excellent, they are slightly clouded by the "1% low", that is, the average of the lowest 1% of FPS values. Where the desktop RTX 4070 Ti holds a solid 121 FPS, guaranteeing rock-solid smoothness no matter what, we measured a "1% low" ranging from 50 to 67 FPS here.
The gap between the average and this 1% minimum sometimes causes slight stutters visible in-game, especially during long sessions, once the laptop starts to heat up.
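To make the "1% low" metric concrete, here is a minimal sketch of one common way to compute it, averaging the worst 1% of per-frame FPS samples. Benchmark tools differ in their exact methodology, so this is an illustration, not the method used by any specific tool.

```python
# Sketch of a "1% low" computation: average the worst 1% of FPS samples.
# A handful of heavy stutters can drag this metric far below the mean FPS,
# which is why it captures perceived smoothness better than the average.

def one_percent_low(fps_samples):
    """Average of the lowest 1% of FPS values (at least one sample)."""
    worst = sorted(fps_samples)[: max(1, len(fps_samples) // 100)]
    return sum(worst) / len(worst)

samples = [120] * 990 + [55] * 10  # mostly smooth, a few heavy stutters
print(one_percent_low(samples))    # the stutters dominate the metric: 55.0
```

Even though the average of these samples is above 119 FPS, the 1% low sits at 55 FPS, which is exactly the kind of gap described above between the desktop card and the laptop.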
5th Generation Max-Q Technologies
One of the main advantages of Nvidia cards lies not only in the raw power of the GPUs, but also in the whole ecosystem of technologies designed around them. These technologies, which Nvidia groups under the Max-Q name, are here in their 5th generation. Some bring large performance gains; others are harder to measure.
DLSS 3
The first, we have mentioned throughout this article, since it is Nvidia's flagship technology: DLSS. This supersampling technology moves here to its third generation: it not only extrapolates pixels by producing a low-definition render that is then upscaled by the Tensor cores, but now also generates entire frames.
These additional frames provide a real gain in frames per second, and therefore an impression of fluidity, but they come with two trade-offs. The first is increased latency; the second is that these generated frames can only anticipate the movement of what is already on screen. Your extra frames won't let you see an enemy come out from behind a wall any sooner.
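Both trade-offs follow from how generated frames are built. A toy illustration, which is not Nvidia's actual optical-flow algorithm: if a generated frame is derived from two real frames, the second real frame must already exist before the generated one can be shown (hence the latency), and the result contains no information absent from both inputs (hence no earlier sight of an enemy).

```python
# Toy illustration of interpolation-based frame generation (NOT Nvidia's
# actual optical-flow method). A generated frame blends two real frames,
# so it cannot be displayed before the later real frame exists, and it
# cannot contain content missing from both real frames.

def interpolate(frame_a, frame_b, t=0.5):
    """Blend two frames (flat lists of pixel values) at factor t."""
    return [a + (b - a) * t for a, b in zip(frame_a, frame_b)]

real_0 = [0.0, 0.2, 0.4]   # real frame at time t
real_1 = [0.2, 0.4, 0.6]   # real frame at time t+1
generated = interpolate(real_0, real_1)
print(generated)           # midpoint between the two real frames
```

Displaying `generated` between the two real frames doubles the apparent framerate, but the pipeline had to hold back `real_1` to produce it, which is exactly the extra latency measured in our benchmarks.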
In themselves, these trade-offs are entirely acceptable in single-player games and bring very appreciable comfort. In multiplayer, the question arises and it will be up to each player. If you don't play at a competitive level, it's fine; but if you want to optimize your responsiveness as much as possible and your configuration allows it (with a mobile RTX 4090, it most likely does), don't hesitate to stay on DLSS 2.
Dynamic Boost 2.0 (Max-Q 3rd gen)
This technology lets your PC manufacturer allocate more power to the GPU when the CPU doesn't need it. In games that are very hungry for graphics resources but light on CPU work, the GPU can thus gain up to 25 W. This boost is dynamically taken from the CPU's budget and handed back as soon as the latter requires it.
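The budget-shifting idea can be sketched in a few lines. This is purely illustrative, with made-up wattage figures for the CPU budget; Dynamic Boost's real heuristics live in firmware and drivers, not in user code.

```python
# Toy sketch of a dynamic power budget (illustrative only): spare CPU
# headroom, up to a fixed cap, is lent to the GPU and reclaimed as soon
# as the CPU needs it again.

BOOST_CAP_W = 25  # maximum extra power the GPU may borrow (per the article)

def gpu_power(gpu_base_w, cpu_budget_w, cpu_demand_w):
    """GPU power = base TGP + unused CPU headroom, capped at BOOST_CAP_W."""
    headroom = max(0, cpu_budget_w - cpu_demand_w)
    return gpu_base_w + min(BOOST_CAP_W, headroom)

print(gpu_power(150, 55, 20))  # GPU-bound scene: full +25 W boost -> 175
print(gpu_power(150, 55, 45))  # CPU busy: only 10 W to spare -> 160
```

With the 150 W TGP of our review unit, the full 25 W boost gives exactly the 175 W ceiling mentioned in the spec section.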
This fifth generation focuses its innovations more on power consumption than on performance (already greatly improved by DLSS 3). Specifically, Nvidia announces high-efficiency memory, low-voltage graphics memory, and Tri-Speed Memory Control to consume less when full power is not needed.
In practice, this does not translate into a visible drop in consumption. During our tests, the GPU drew 7 to 10 W at idle on the desktop and around 110 W in-game. We thus remain at roughly the previous generation's consumption levels, but with a large performance gain.
A mobile GPU designed for 4K
The mobile RTX 40xx range thus promises a generational leap equivalent to the desktop range's. The gain is clear, especially in DLSS 3-compatible games. However, these are still few (around fifty, counting those coming soon).
With this mobile RTX 4090 specifically, we are looking at a GPU made for the most demanding users; you will not find better in a laptop today. When present, DLSS 3 lets you game without flinching at 1440p, or even 4K, with performance equivalent to the previous generation at Full HD, and with no increase in power draw. Clearly, the performance-per-watt ratio is there.
Of course, excellence comes at a price: in this configuration, the Razer Blade 16 costs 5,400 euros. At that price, it would obviously be hard to recommend to ordinary mortals, at least until more DLSS 3-compatible games are released. In the meantime, we look forward to testing the rest of Nvidia's mobile 40xx range, whose proudest representative lived up to our expectations.