
Review: Razer Blade 15-Inch Mobile Workstation

By Mike McCarthy

I am always looking for the most powerful tools in the smallest packages, so I decided to check out the Razer Blade 15-inch laptop with an Nvidia GeForce RTX 2080 Max-Q graphics card. The Max-Q variants are optimized for better thermals and power usage, at the potential expense of some performance, in order to allow more powerful GPUs to fit into smaller laptops. The RTX 2080 is Nvidia's top-end mobile GPU, with 2,944 CUDA cores, 8GB of GDDR6 memory delivering 384GB/s of bandwidth, and 13.6 billion transistors on the chip.
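As a quick sanity check on that bandwidth figure, GDDR6 bandwidth is just the memory bus width times the effective data rate per pin. A minimal sketch, assuming the commonly quoted 256-bit bus running at an effective 12Gbps (my assumption, not numbers stated in this review):

```python
# Back-of-the-envelope GDDR6 bandwidth check.
# Assumed values (not from the review): 256-bit memory bus, 12Gbps effective per pin.
bus_width_bits = 256
effective_rate_gbps = 12

bandwidth_gb_per_s = bus_width_bits * effective_rate_gbps / 8  # bits -> bytes
print(f"Estimated memory bandwidth: {bandwidth_gb_per_s:.0f} GB/s")  # ~384 GB/s
```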

The new Razer Blade has a six-core Intel Core i7-8750H processor with 16GB of RAM and a 512GB SSD. It has Mini DisplayPort 1.4, HDMI 2.0b, Thunderbolt 3 and three USB 3.1 ports. Its 15.6-inch screen can run at a 144Hz refresh rate but only supports full HD 1920×1080, which is optimized for gaming, not content creation. The past four laptops I have used have all been UHD resolution at various sizes, which gives far more screen real estate for creative applications and better resolution for reviewing your imagery.

I also prefer to have an Ethernet port, but I am beginning to accept that a dongle might be good enough for that, especially since it opens up the possibility of using 10 Gigabit Ethernet. We aren't going to see 10GigE built into laptops anytime soon due to its excessive power consumption, but you only need 10GigE at locations that support it, so a dongle or docking station is reasonable for those use cases.

Certain functionality on the system requires registering a free account with Razer, which is annoying, but I've found this requirement is becoming the norm these days. The account gives access to the Razer Synapse utility for customizing system settings, setting fan speeds and even remapping keyboard functionality. Any other Razer peripherals would be controlled here as well. As befits a top-end modern gaming system, the keyboard has fully controllable color backlighting. While I find most of the default "effects" distracting, the option to color-code your shortcut keys is interesting. And if you really want to go to the next level, you can customize it further.

For example, when you press the Fn key, by default the keys that have function behaviors mapped to them light up white, which impressed me. The colors and dimming are generated by pulsing the LEDs, and I could perceive that flicker when moving my eyes, so I stuck with colors that keep each channel fully on or fully off rather than dimmed. That still gave me six options (red, green and blue, plus cyan, yellow and magenta) as well as white.
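To spell out where those seven options come from: if every color channel is either fully on or fully off, no PWM dimming is needed, and there are exactly seven non-black combinations. A quick illustrative sketch of the math, not tied to Razer's Synapse software:

```python
from itertools import product

# Each RGB channel is either fully on (255) or fully off (0), so no PWM dimming.
# Excluding all-off (black) leaves red, green, blue, cyan, magenta, yellow and white.
full_on_colors = [rgb for rgb in product((0, 255), repeat=3) if rgb != (0, 0, 0)]
print(len(full_on_colors), "usable colors:", full_on_colors)
```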

This is the color configuration I was running in the photos, but the camera does not capture how it actually looks. In pictures, the keys look washed out, but in person they are almost too bright and vibrant. But we are here for more than looks, so it was time to put the system through its paces and see what it can do under the hood.

Testing
I ran a number of benchmarks, starting with Adobe Premiere Pro. I now have a consistent set of tests to run on workstations in order to compare each system. The tests involve Red, Sony Venice and ARRI Alexa source files, with various GPU effects applied and exported to compressed formats. It handled the 4K and 8K renders quite well — pretty comparable to full desktop systems — showcasing the power of the RTX GPU. Under the sustained load of rendering for 30 minutes, it did get quite warm, so you will want adequate ventilation … and you won’t want it sitting on your lap.
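For anyone who wants to run this kind of sustained-render comparison on their own machine, the core of it is nothing more than wall-clock timing around whatever export you trigger. A minimal sketch; the render command is a placeholder, since my own exports were run from Premiere's export queue rather than scripted:

```python
import subprocess
import sys
import time

def time_render(cmd):
    """Run one export command and return its wall-clock duration in seconds."""
    start = time.perf_counter()
    subprocess.run(cmd, check=True)
    return time.perf_counter() - start

# Placeholder command: swap in whatever actually drives your render
# (a watch-folder script, a render CLI, etc.).
elapsed = time_render([sys.executable, "-c", "print('render placeholder')"])
print(f"Render took {elapsed:.1f} seconds")
```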

My next test was RedCine-X Pro, with its new CUDA playback acceleration of files up to 8K. But what is the point of decoding 8K if you can’t see all the pixels you are processing? So for this test, I also connected my Dell UP3218K screen to the Razer Blade’s Mini DisplayPort 1.4 output. Outputting to the monitor does affect performance a bit, but that is a reasonable expectation. It doesn’t matter if you can decode 8K in real time if you can’t display it. Nvidia provides reviewers with links to some test footage, but I have 40TB to choose from, in addition to test clips from all different settings on the various cameras from my Large Format Camera test last year.

The 4K Red files worked great at full res to the external monitor, whether full screen or pixel for pixel, while the system barely kept up with the 6K and 8K anamorphic files. Full-frame 8K required half-res playback to view smoothly on the 8K display, and was barely realtime even with the external monitor disabled, but that is still very impressive for a laptop (I have yet to accomplish that on my desktop). The rest of the files played back solidly on the local display. With CUDA GPU acceleration disabled, playback had to drop below 1/8th res to do anything on a laptop, so this is where having a powerful GPU makes a big difference.

Blackmagic's DaVinci Resolve is the other major video editing program to consider, and while I do not find it intuitive to use myself, I usually recommend it to others who are looking for a high level of functionality but aren't ready to pay for Premiere. I downloaded and rendered a test project from Nvidia, which plays Blackmagic RAW files in real time with a variety of effects and renders to H.264 in 40 seconds; with CUDA disabled in Resolve, the same render takes 10 times longer.

Here, as with the other tests, the real-world significance isn't how much faster it is with a GPU than without, but how much faster it is with this RTX GPU compared to other options. Nvidia claims this render takes 2.5 times as long on a Radeon-based MacBook Pro, and 10% longer on a previous-generation GTX 1080 laptop, which is consistent with my previous experience and tests.
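Translating those ratios into rough absolute times makes the comparison easier to read. Only the 40-second RTX render is a measured number; the rest are derived from the claimed ratios above:

```python
# Resolve test render: only the RTX time is measured; the rest follow the quoted ratios.
rtx_2080_s = 40
comparisons = {
    "RTX 2080 Max-Q (measured)": rtx_2080_s,
    "GTX 1080 laptop (claimed +10%)": rtx_2080_s * 1.10,
    "Radeon MacBook Pro (claimed 2.5x)": rtx_2080_s * 2.5,
    "CUDA disabled (10x)": rtx_2080_s * 10,
}
for label, seconds in comparisons.items():
    print(f"{label:35s} ~{seconds:.0f} s  ({seconds / rtx_2080_s:.1f}x)")
```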

The primary differentiation of Nvidia’s RTX line of GPUs is the inclusion of RT cores to accelerate raytracing and Tensor cores to accelerate AI inferencing, so I wanted to try tasks that used those accelerations. I started by testing Adobe’s AI-based image enhancement in Lightroom Classic CC. Nvidia claims that the AI image enhancement uses the RTX’s Tensor cores, and it is four times faster with the RTX card. The visual results of the process didn’t appear to be much better than I could have achieved with manual development in Photoshop, but it was a lot faster to let the computer figure out what to do to improve the images. I also ran into an issue where certain blocks of the image got corrupted in the process, but I am not sure if Adobe or Nvidia is at fault here.

Raytracing
While I could have used this review as an excuse to go play Battlefield V to experience raytracing in video games, I stuck with the content-creation focus. In looking for a way to test raytracing, Nvidia pointed me to OctaneRender. Otoy has created a utility called OctaneBench for measuring the performance of various hardware configurations with its render engine. It reported that the RTX’s raytracing acceleration was giving me a 3x increase in render performance.

I also tested ProRender in Maxon Cinema 4D, which does not take advantage of the RTX's dedicated raytracing cores but does use GPU acceleration through OpenCL. Apparently, there is a way to use the Arnold raytracing engine in Cinema 4D, but I was reaching the limits of my 3D animation expertise and resources, so I didn't pursue that path, and I didn't test Maya for the same reason.

With ProRender, I was able to render views of various demo scenes 10 to 20 times faster than I could with a CPU only. I will probably include this as a regular test in future reviews, allowing me to gauge render performance far better than I can with Cinebench (which returned a CPU score of 836). And compiling a list of comparison render times will add more context to the raw data. But, for now, I was able to render the demo "Bamboo" scene in 39 seconds and the more complex "Coffee Bean" scene in 188 seconds, beating even the Nvidia marketing team's expected results.
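To put those GPU times in perspective against a CPU-only render, the observed 10x-20x speedup implies roughly the following CPU-only durations (back-of-the-envelope estimates from the range above, not measured):

```python
# Measured GPU render times on the Razer Blade (seconds).
gpu_times_s = {"Bamboo": 39, "Coffee Bean": 188}

# Observed ProRender speedup range on this GPU vs. CPU-only rendering.
speedup_low, speedup_high = 10, 20

for scene, gpu_s in gpu_times_s.items():
    low_min = gpu_s * speedup_low / 60
    high_min = gpu_s * speedup_high / 60
    print(f"{scene}: {gpu_s}s on GPU -> roughly {low_min:.0f}-{high_min:.0f} minutes CPU-only")
```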

VR
No test of a top-end GPU would be complete without trying out its VR performance. I connected my Windows-based Lenovo Explorer Mixed Reality headset, installed SteamVR and tested both 360 video editing in Premiere Pro and the true 3D experiences available in Steam. As would be expected, the experience was smooth, making this one of the most portable solutions for full-performance VR.

The RTX 2080 is a great GPU, and I had no issues with it. Outside of true 3D work, the upgrade from the Pascal-based GTX 1080 is minor, but anyone moving up from systems older than that, or doing true raytracing or AI processing, will see a noticeable improvement in performance.

The new Razer Blade is a powerful laptop for its size, and while I did like it, that doesn't mean I didn't run into a few issues along the way. Some of those, like the screen resolution, are due to its focus on gaming instead of content creation, but I also had an issue with the touchpad. Touchpad issues are common when you switch between devices constantly, but in this case it would register right-clicks when I meant to left-click and would not register movement while the button was pressed, which were major headaches. The problems were only alleviated by connecting a mouse and sticking with that, which I frequently do anyway. The power supply has a rather large connector on a cumbersome, thick and stiff cord, but it isn't going to fall out once you get it inserted. Battery life will vary greatly depending on how much processing power you are using.

These RTX chips are the first mobile GPUs with dedicated RT cores and Tensor cores, since the Volta-based chips never came to laptops. So for anyone with processing needs that are accelerated by those developments, the new RTX chip is obviously worth the upgrade. If you want the fastest thing out there, this is it. (Or at least it was, until Razer added options this week for 9th-generation Intel processors and a 4K OLED screen, an upgrade I would highly recommend for content creators.) The model I reviewed goes for $3,000. The new 9th-gen version with a 240Hz screen is the same price, while the 4K OLED Touch version costs an extra $300.

Summing Up
If you are looking for a more balanced solution or are on a more limited budget, you should definitely compare the new Razer Blade to laptops built on Nvidia's just-announced GTX 16-series mobile GPUs. Then decide which option is a better fit for your particular needs and budget.

The development of eGPUs has definitely shifted this ideal target for my usage. This system has a Thunderbolt 3 port, but its internal GPU is fast enough that you won't see significant gains from an eGPU; that built-in power comes at the expense of battery life and price. I am drawn to eGPUs because I only need maximum performance at my desk, but if you need top-end graphics performance totally untethered, the RTX Max-Q chips are the solution for you.


Mike McCarthy is an online editor/workflow consultant with over 10 years of experience on feature films and commercials. He has been involved in pioneering new solutions for tapeless workflows, DSLR filmmaking and multi-screen and surround video experiences. Check out his site.

