

4 Posts authored by: Scott Wasson

If you’ve been around PC hardware and software for a while, you’ll know there’s a constant tug-of-war between software needs and hardware capabilities. Although hardware consistently grows more capable over time, applications are always asking for more. That’s why we built the Radeon™ VII graphics card with a remarkable HBM2 memory configuration with 16 GB of VRAM capacity and one terabyte per second of bandwidth. This new GPU is meant to handle some of the most demanding new workloads that gamers and content creators can throw at it.


Content creation professionals will appreciate how the Radeon™ VII's 16GB of VRAM capacity frees them to manipulate exceptionally high-quality assets, like 4K and 8K video, while using their traditional tool chains.
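A quick back-of-envelope calculation shows why high-resolution editing consumes so much VRAM. The 8-bytes-per-pixel working format (16 bits per RGBA channel) and the 30-frame cache size below are illustrative assumptions, not Premiere's actual internals:

```python
# Back-of-envelope estimate of VRAM needed to hold uncompressed video
# frames for GPU-accelerated editing. The 8 bytes/pixel working format
# (16-bit-per-channel RGBA) and 30-frame cache are assumptions.

def frame_bytes(width, height, bytes_per_pixel=8):
    """Size of one uncompressed frame in bytes."""
    return width * height * bytes_per_pixel

def cache_gib(width, height, frames, bytes_per_pixel=8):
    """VRAM, in GiB, needed to cache `frames` frames at once."""
    return frame_bytes(width, height, bytes_per_pixel) * frames / 2**30

for name, (w, h) in {"4K UHD": (3840, 2160), "8K UHD": (7680, 4320)}.items():
    print(f"{name}: one frame = {frame_bytes(w, h) / 2**20:.0f} MiB, "
          f"30-frame cache = {cache_gib(w, h, 30):.1f} GiB")
```

An 8K frame is four times the size of a 4K frame, so even a modest cache of working frames quickly climbs into the multi-gigabyte range.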

For a very common example, here’s a look at measured VRAM usage in Adobe Premiere while encoding 4K and 8K video on the Radeon VII.




| Application | Workload | VRAM used |
| --- | --- | --- |
| Adobe Premiere | 8K video encoding | 11.5 GB |
| Adobe Premiere | 4K video encoding | 10.2 GB |

Test system configuration in endnote[i]


Of course, every workload is different. Video formats continue to evolve with support for higher pixel counts, HDR, wide color gamuts, and high frame rates. Other content formats are growing in fidelity in response. Creative professionals will know their data sets and understand where the Radeon™ VII's 16GB of VRAM can improve their workflows.


Meanwhile, game developers are always pushing the boundaries with breathtaking visuals and huge, open worlds to explore. The latest games include ultra-high-quality art assets that stand up to scrutiny on high-PPI monitors, and they support deeper color formats for high-dynamic range displays. They also use some nifty tricks to maintain steady frame rates, like adaptive quality and dynamic resolution scaling. Each of these things can improve the gaming experience—and together, they can look absolutely glorious in motion—but they all require larger video memory capacity.

As a result, today’s games can exceed 8GB of VRAM usage at their highest quality settings.  Here’s a look at the VRAM allocation we measured while running some popular games.



| Game | Release date | VRAM used | Settings |
| --- | --- | --- | --- |
| Call of Duty: Black Ops 4 | Oct 2018 | 11.9 GB | Highest settings |
| Far Cry 5 | Mar 2018 | 12.9 GB | Highest settings, dynamic resolution |
| Rise of the Tomb Raider | Jan 2016 | 9.8 GB | Highest settings |
| Star Control: Origins | Sep 2018 | 9.3 GB | Highest settings |
| Resident Evil 2 | Jan 2019 | 8.8 GB | Highest settings |
| Tom Clancy's Ghost Recon Wildlands | Mar 2017 | 8.2 GB | Highest settings |

Test system configuration in endnote[ii]


This list is just a start. As you probably know, this is an especially busy season for the game industry. We expect this list to grow as new titles are released.



Of course, tools that measure VRAM allocation don’t always tell the whole story. Applications sometimes fail to let go of bits they’re no longer actively using, so indicated usage exceeding your video card’s VRAM capacity doesn’t always lead to obvious complications. When a system does run up against a VRAM limit in practice, though, the result can be severe slowdowns or even instability.


Here’s one example of what happens when a game overruns the video card’s VRAM capacity. Below is a plot of the frame times over time while walking through the Montana forest in Far Cry 5.[iii]



The 8GB card suffers from intermittent spikes to very high frame times. While playing, you'll perceive these spikes as slowdowns and stuttering. With its larger memory capacity, the Radeon VII maintains fluid animation where the competition cannot. This difference may not be obvious when looking at an FPS average, but you'll definitely notice it while playing a game.


That’s the case for higher VRAM capacity in a nutshell: when you need it, you’ll really wish you had it.


Radeon RX Vega owners may be wondering how their cards fit into this picture. The “Vega” architecture has a feature meant to deal with situations like this one: the High Bandwidth Cache Controller (HBCC). The HBCC reserves a portion of system memory for use by the GPU, effectively extending the VRAM capacity. The HBCC then manages the migration of data between local VRAM and system memory, making sure the right bits are in VRAM as needed.
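AMD hasn't published the HBCC's exact policy, but the general idea (keep recently used pages resident in VRAM, spill cold pages to system memory, and migrate them back on demand) can be sketched as a simple least-recently-used cache. This is a toy model for illustration, not the real algorithm:

```python
from collections import OrderedDict

class LruVram:
    """Toy model of VRAM as an LRU cache over memory pages, with system
    memory as the backing store. Illustrative only; the real HBCC's
    replacement policy is not public."""

    def __init__(self, vram_pages):
        self.capacity = vram_pages
        self.vram = OrderedDict()   # page id -> True, kept in LRU order
        self.migrations = 0         # pages pulled in from system memory

    def access(self, page):
        if page in self.vram:
            self.vram.move_to_end(page)    # hit: mark most recently used
            return "hit"
        self.migrations += 1               # miss: migrate page into VRAM
        if len(self.vram) >= self.capacity:
            self.vram.popitem(last=False)  # evict least recently used page
        self.vram[page] = True
        return "miss"

# A working set of 6 pages on a 4-page "VRAM": pages evicted under
# pressure must be migrated back in when they are touched again.
cache = LruVram(vram_pages=4)
trace = [0, 1, 2, 3, 0, 1, 2, 3, 4, 5, 0, 1]
results = [cache.access(p) for p in trace]
print(results)
print("migrations from system memory:", cache.migrations)
```

When the working set fits in VRAM, every access after warm-up is a hit; once it exceeds capacity, migrations start to churn, which is exactly the situation where a larger VRAM pool or a smarter caching policy pays off.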

Here’s a look at how the Radeon RX Vega 64 performs with and without the HBCC enabled.[iii]



As you can see, turning on the HBCC reduces the number and severity of the stutters in this scenario. So the HBCC is doing its job well.


In fact, the results below from multiple test runs tell an even more interesting story. We’re looking at “time spent beyond 50 ms,” an indicator of the amount of stutter. This metric adds up all the time across our test run where the delays between frames are more than 50 milliseconds. The higher that number, the rougher the animation. The lower, the better.
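The metric itself is easy to compute from a frame-time log. A minimal sketch (the sample frame times below are made up for illustration; real data would come from a frame-time capture tool):

```python
def time_beyond(frame_times_ms, threshold_ms=50.0):
    """Sum the amount by which each frame time exceeds the threshold.
    A higher total means more (and worse) stutter across the run."""
    return sum(t - threshold_ms for t in frame_times_ms if t > threshold_ms)

# A mostly smooth run (~16.7 ms/frame) with two large stutters:
frames = [16.7] * 10 + [120.0] + [16.7] * 10 + [80.0]
print(f"time spent beyond 50 ms: {time_beyond(frames):.0f} ms")
```

Note that only the excess above the threshold counts, so a single 120 ms hitch contributes 70 ms to the total while smooth 16.7 ms frames contribute nothing.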


[Chart: "time spent beyond 50 ms" for the Radeon RX Vega 64 LC 8GB, Radeon RX Vega 64 LC 8GB with HBCC, Radeon VII 16GB, and GeForce RTX 2080 8GB across three test runs; per-run values appear in the original chart.]




The HBCC greatly reduces slowdowns, especially in the later test runs, once the HBCC’s caching algorithm understands how the application is using its data.


Meanwhile, the Radeon VII with 16GB virtually eliminates stutter as measured by this metric. However, Radeon VII owners can rest easy in the knowledge that they, too, can enable HBCC if needed when future applications overrun their card’s 16GB of VRAM.


Scott Wasson is Sr. Manager, Product Management for AMD. His postings are his own opinions and may not represent AMD’s positions, strategies or opinions. Links to third-party sites are provided for convenience, and unless explicitly stated, AMD is not responsible for the contents of such linked sites and no endorsement is implied. GD-5


The information contained herein is for informational purposes only and is subject to change without notice. Timelines, roadmaps, and/or product release dates shown are plans only and subject to change. “Vega” is a codename for AMD architectures, and is not a product name. GD-122


[i] Testing done by AMD performance labs 2/1/19 on Intel i7 7700K,16GB DDR4 3000MHz, Radeon VII, AMD Driver 18.50 and Windows 10. Using Adobe Premiere video encoding at 8K and Adobe Premiere encoding at 4K: Radeon VII used 11.5 GB and 10.2 GB of memory respectively. PC manufacturers may vary configurations yielding different results. All scores are an average of 3 runs with the same settings. Performance may vary based on use of latest drivers. RX-296


[ii] Testing done by AMD performance labs 2/1/19 on Intel i7 7700K,16GB DDR4 3000MHz, Radeon VII, AMD Driver 18.50 and Windows 10. Using Far Cry 5, Call of Duty Black Ops 4, Rise of the Tomb Raider DX12, Star Control: Origins, Resident Evil 2 and Tom Clancy’s Ghost Recon Wildlands. Radeon VII used 12.9 GB, 11.9 GB, 9.8 GB, 9.3 GB, 8.8 GB and 8.2 GB of memory respectively. PC manufacturers may vary configurations yielding different results. All scores are an average of 3 runs with the same settings. Performance may vary based on use of latest drivers. RX-295


[iii] Testing done by AMD performance labs 2/1/19 on Intel Core i7-5960X (3.0GHz), Gigabyte X99-UD5 WiFi, 16GB Corsair Vengeance LPX (2x8GB) DDR4-2666MHz, Windows 10 64-bit version 1809, display drivers 18.50-RC11-190110, NVIDIA Driver 417.71 WHQL. Using Far Cry 5 configured at 3840x2160 via VSR/DSR, 2560x1440 target display, Ultra quality presets, HDR10, dynamic resolution enabled: Radeon RX Vega 64 LC averaged 142 ms, Radeon RX Vega 64 LC HBCC averaged 46 ms, Radeon VII averaged 1 ms and RTX 2080 averaged 622 ms above 50 ms. PC manufacturers may vary configurations yielding different results. All scores are an average of 3 runs with the same settings. Performance may vary based on use of latest drivers. RX-297


[Originally posted on 07/26/17.]


A key focus in PC gaming in recent years has been providing smooth, responsive gaming experiences. The goal has been to produce consistent and fluid animation in combination with minimal input lag—the shortest possible delay between pressing a key and seeing a response on-screen.


Since the beginning of PC graphics, one of the biggest problems on this front has been synchronization between the game’s animation and the display’s update rate. Most displays update themselves at a fixed rate, typically at 60Hz, or 60 times per second, in fixed steps. Meanwhile, games and other 3D graphics applications can produce new frames of animation at different rates, and those frame rates tend to vary over time. Often, much of what we perceive as slowdowns or sluggishness when gaming involves poor interactions between these two timing loops.


In fact, on a 60Hz display, animation can look more uneven when a game is running at 40 FPS than at 30 FPS, because at 40 FPS, the display is updated in an uneven pattern:



*Game images from Quake Champions[1]


Versus a more even pattern at 30 FPS:



*Game images from Quake Champions


You’re seeing a less pleasing pattern of animation, even though the GPU is cranking out frames at a higher rate, thanks to a timing sync issue between the game and the display.
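The cadence behind this can be reproduced with a bit of integer arithmetic. Assuming a steady frame rate and classic vsync (each completed frame is held on screen until the first refresh after the next one is ready), this sketch prints how many refresh intervals each frame stays on screen:

```python
def hold_pattern(fps, refresh_hz=60, frames=9):
    """Number of refresh intervals each frame is held on screen with
    vsync, for a steady frame rate on a fixed-refresh display."""
    # Refresh tick on which frame i first appears: ceil(i * refresh_hz / fps),
    # computed with integer math to avoid floating-point rounding.
    ticks = [-(-i * refresh_hz // fps) for i in range(1, frames + 1)]
    return [b - a for a, b in zip(ticks, ticks[1:])]

print("40 FPS on 60 Hz:", hold_pattern(40))  # alternating 1-2-1-2 cadence
print("30 FPS on 60 Hz:", hold_pattern(30))  # even 2-2-2-2 cadence
```

At 40 FPS each frame is shown for one refresh, then two, then one, and so on; at 30 FPS every frame is shown for exactly two refreshes, which is why the lower frame rate can actually look smoother.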


We’ve come up with some outstanding technology to address this problem, most notably Radeon™ FreeSync technology for compatible monitors with variable refresh rates. I could talk about the theory all day, but you have to see FreeSync in action in order to appreciate it properly. Once you’ve experienced it, you won’t ever want to go back to gaming on a fixed-refresh display.


FreeSync as it stands now is excellent, but we can do even more to help owners of fixed- and variable-refresh displays alike. We’ve been working on this problem ahead of the Radeon™ RX Vega graphics launch, and the result is a new feature known as Enhanced Sync. Enhanced Sync is included in Radeon™ Software Crimson ReLive Edition 17.7.2, and it’s supported on the upcoming Radeon RX Vega cards and on “Polaris”-based cards in the Radeon™ RX 400 and Radeon™ RX 500 series.


Enhanced Sync looks to tackle two different aspects of the GPU-display synchronization task, with the goal of providing a better combination of responsiveness and image quality.


The first problem it tackles is what happens when the game wants to run faster than the display’s refresh rate. It’s nice when your PC is able to produce frames faster than your monitor can display them, but dealing with that situation still involves compromises.


One way to handle this scenario is with traditional vsync, where the display is updated with a new, completed frame at each refresh interval. Doing so looks nice and generally produces smooth animation, but it also effectively caps the game’s frame rate at the speed of the display refresh. For instance, on a 60Hz display, you’d be limited to 60 FPS. For many games, that also means that user inputs are only sampled 60 times per second, because the speed of the game loop is tied to the frame rate.  As a result, traditional vsync can increase input lag and reduce responsiveness, which is why many gamers elect to disable vsync.


Trouble is, going without vsync has its own problems. Without vsync, the driver will flip to a new display buffer as soon as the GPU completes a frame—even if the display is in the middle of drawing that frame on the screen. This approach cuts input lag, but it also leads to a nasty artifact called tearing, where portions of two or more frames are shown on-screen at once, often with visible seams running horizontally across the display. At high frame rates, one may see portions of many different frames on the screen at once, seriously compromising image integrity.




*Game images from Quake Champions


Enhanced Sync is a third approach to this problem. It lets the game run as fast as it wants without capping frame rates. With Enhanced Sync enabled, a game could in theory run at 240 FPS on a 60Hz display without issue. But Enhanced Sync doesn’t tear in this case. Instead, when it comes time for the monitor to draw a new frame, the most recently completed frame is displayed on the screen. Some older frames may be dropped if they are not needed. This approach maintains smooth animation, reduces tearing, and improves responsiveness by reducing input lag.
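The frame-selection behavior described above can be modeled in a few lines. This is a toy simulation under idealized assumptions (perfectly steady render and refresh intervals), not the driver's actual logic:

```python
def enhanced_sync_sim(render_fps, refresh_hz):
    """One second of a toy Enhanced Sync model: at each of refresh_hz
    refreshes, scan out the most recently completed frame. Frames that
    are overtaken before a refresh arrives are simply dropped; there is
    no tearing and no frame-rate cap."""
    # Frames completed by refresh tick t: t * render_fps / refresh_hz
    shown = [tick * render_fps // refresh_hz for tick in range(1, refresh_hz + 1)]
    dropped = render_fps - len(set(shown))
    return shown, dropped

shown, dropped = enhanced_sync_sim(render_fps=240, refresh_hz=60)
print("first refreshes show frames:", shown[:4])   # [4, 8, 12, 16]
print("frames dropped per second:", dropped)       # 180
```

The game renders 240 frames, the display shows 60 of them (always the newest), and the rest are discarded. Input is sampled at the full 240 FPS game-loop rate, which is where the responsiveness win comes from.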


To get a sense of how well it works, we measured input lag for the two traditional vertical refresh sync modes (on and off) against Enhanced Sync in Overwatch using a high-speed camera. In this case, the GPU was able to run Overwatch at about 120 FPS unconstrained. These results show the amount of time that passes between a click and a response for each mode. As you can see, Enhanced Sync produces click-to-response times similar to vsync off—without compromising visual integrity by tearing.[2]




So that’s the first problem Enhanced Sync addresses, and I think it’s a better solution than the traditional approaches to vsync.


The second problem Enhanced Sync addresses is at the other end of the performance spectrum: what happens when the game runs much slower than the display’s refresh rate? Low frame rates present a different sort of challenge.


With traditional vsync, if the system can’t produce a new frame in time for the monitor’s next refresh interval, then the old frame is repeated again, and we wait another entire interval before updating the screen. Those waits can add up. On a 60Hz display, if the system can’t get a frame out every interval, then it’s immediately limited to 30 FPS or even 20 FPS after that. Frame rates will move up and down in stair-step fashion, and we tend to perceive this effect as stutter or slowdowns (the technical term for this stair-step effect is quantization). Worse still, stepping down to such low frame rates increases input lag and compromises responsiveness.
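The stair-step effect falls out of simple arithmetic: with vsync, every frame occupies a whole number of refresh intervals, so the displayed rate snaps to 60, 30, 20, 15 FPS and so on. A small sketch:

```python
import math

def vsync_fps(render_ms, refresh_hz=60):
    """Effective displayed frame rate when every frame must wait for the
    next refresh (classic double-buffered vsync). A frame that misses a
    refresh by even a little costs a whole extra interval."""
    refresh_ms = 1000.0 / refresh_hz
    intervals = math.ceil(render_ms / refresh_ms - 1e-9)  # refreshes per frame
    return 1000.0 / (intervals * refresh_ms)

for render_ms in (16.0, 17.0, 25.0, 34.0):
    print(f"{render_ms:.0f} ms/frame -> {vsync_fps(render_ms):.0f} FPS displayed")
```

A frame that takes 17 ms instead of 16 ms is only 6% slower to render, but under vsync on a 60Hz display it halves the displayed frame rate to 30 FPS; that cliff is the quantization the text describes.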


Enhanced Sync deals with this problem by taking a dynamic approach. Generally, Enhanced Sync will stay synchronized to the display in order to avoid tearing. If the frame rate drops far enough below the display’s refresh rate, though, it will dynamically choose to allow tearing in order to get new information on screen as soon as possible and to avoid that stair-step effect. Enabling tearing is a compromise, but it’s arguably the best way of dealing with this difficult circumstance.[3]




When Enhanced Sync does allow tearing, users should typically only see a single tearing “seam” on the screen at once, since the frame rate is low. And Enhanced Sync will automatically choose to stop allowing tearing once the game’s frame rate returns to a more comfortable level.


So Enhanced Sync improves on traditional vsync by combining two techniques. At high frame rates, it aims to provide a better mix of visual integrity and responsiveness. At lower frame rates, it uses a dynamic algorithm to minimize both stuttering and input lag when the going gets tough.


At this point, you may be wondering how Enhanced Sync interacts with our FreeSync variable-refresh display technology. I’m happy to report that Enhanced Sync works alongside FreeSync to provide even better experiences.


Within the display’s FreeSync range, say 30Hz to 90Hz on some displays, FreeSync operates as usual. Frames are displayed when ready, at low latency, and with no tearing.


When the game’s frame rate exceeds the display’s peak refresh rate, Enhanced Sync works like it would with a fixed-refresh monitor in the same situation. The game is free to run as fast as it wants, uncapped, and the latest complete frame is displayed. If your monitor’s peak refresh rate is 90Hz, the game could still run at 120 FPS—without tearing, and with improved responsiveness versus traditional vsync at 90Hz.


At the other end of the spectrum, when the frame rate drops well below the FreeSync display’s minimum refresh rate, one of two things will happen.


On displays that support low frame-rate compensation (LFC), the FreeSync LFC algorithm kicks in to mitigate stutter without tearing. If LFC isn’t available, then Enhanced Sync will either sync or tear, depending on the application’s vsync settings.
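The LFC idea (repeat each frame a whole number of times so the effective refresh rate lands back inside the display's variable range) can be sketched as follows. The 30-90Hz range and the simple selection loop are illustrative assumptions; real LFC implementations use more sophisticated heuristics:

```python
def lfc_refresh(frame_fps, range_hz=(30, 90)):
    """Pick a refresh rate inside the FreeSync range that is an integer
    multiple of the frame rate, so every frame is repeated whole and no
    tearing is needed. Sketch only; real LFC is adaptive and predictive."""
    lo, hi = range_hz
    if frame_fps >= lo:
        return frame_fps, 1            # already inside the range: 1:1
    for repeats in range(2, 10):       # try showing each frame 2x, 3x, ...
        if lo <= frame_fps * repeats <= hi:
            return frame_fps * repeats, repeats
    return None

for fps in (25, 20, 12):
    refresh, repeats = lfc_refresh(fps)
    print(f"{fps} FPS -> refresh at {refresh} Hz, each frame shown {repeats}x")
```

A game dipping to 25 FPS on a hypothetical 30-90Hz panel would be displayed at 50Hz with every frame shown twice: smooth pacing, no tearing, and no stair-step down to a fixed lower rate.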


I’m especially excited about the combination of Enhanced Sync and FreeSync with LFC. I think of it as a “best of all worlds” sync scenario, providing smooth animation at low latency with no tearing across the broadest possible range. FreeSync is already quite solid, but Enhanced Sync makes it even better.


Happily, Enhanced Sync is supported on all recent flavors of DirectX®, from 9 through 12, and it can be enabled in Radeon™ Settings under the vertical refresh sync drop-down menu. If you have a supported Radeon™ GPU, you can download the latest release of Radeon Software Crimson ReLive Edition and try it out for yourself. I think you’ll like it.




Scott Wasson, Sr. Manager, Technical Marketing for the Radeon Technologies Group at AMD. His postings are his own opinions and may not represent AMD’s positions, strategies, or opinions. Links to third party sites and references to third party trademarks are provided for convenience and illustrative purposes only. Unless explicitly stated, AMD is not responsible for the contents of such links, and no third party endorsement of AMD or any of its products is implied.



  1. Quake Champions logos and images © 2017 Bethesda Softworks LLC, a ZeniMax Media company. All Rights Reserved.
  2. Testing
  3. Testing conducted by AMD Performance Labs as of July 10, 2017 on the 8GB Radeon RX 580 with Radeon Software Crimson ReLive Edition 17.7.2, on a test system comprising an Intel i7 7700K CPU (4.2 GHz), 16GB DDR4-3000MHz system memory, and Windows 10 x64 using the game Overwatch on the epic preset. PC manufacturers may vary configurations, yielding different results. At 3840X2160, Radeon Software Crimson Edition driver 17.7.2 and 8GB Radeon RX 580 with Enhanced Sync ON had a 4.2ms² variance and vsync ON had a 50.4ms² variance, which is 92% lower variance. All times an average of 3 test runs. Results are estimates and may vary. Performance may vary based on use of latest drivers. RS-151


[Originally posted on 02/28/17.]


Compromise. It’s something many VR developers today deal with in their ongoing quest to nail the right mix of technical features and computational power for the best balance of performance and visual fidelity. Many of today’s big game engines use a technique called deferred rendering. Deferred rendering does all of the geometry work first and then shades pixels. That worked well on the last generation of consoles, but it’s not a great fit for VR.


With the forward rendering path in Unreal Engine 4, developed by the amazingly talented engineers at Epic, developers have more choice in how they render for VR, helping to achieve a stunning-looking game while delivering the high frame rates necessary for a good experience.


Discussed on stage at AMD’s “Capsaicin” webcast and press event at the 2017 Game Developers Conference, the forward rendering path provides a strong alternative to the popular deferred rendering method, allowing developers to hit the demanding frame rates necessary for smooth VR experiences with improved image quality. Forward rendering has been showcased in games such as Epic Games’ Robo Recall, and is planned in upcoming VR titles from awesome developers like First Contact Entertainment, Limitless Studios, and Survios.


Technically Speaking: Deferred vs Forward Rendering


Let’s dig in and talk about this a bit. Deferred rendering has a performance cost for each frame, in addition to higher GPU memory and bandwidth requirements compared to forward rendering. While deferred rendering does support some nice features like screen-space reflections, those features are generally too costly to use given VR’s ~90 FPS requirement.


Current head mounted display (HMD) resolutions being what they are, VR also really benefits from high-quality edge smoothing. Unfortunately, deferred rendering doesn’t mix well with multi-sampled anti-aliasing (MSAA) due to performance and image quality issues, and MSAA is arguably the best AA technique for VR. Post-process AA methods like FXAA don’t work especially well with stereo views in VR. If you’ve tried a game that uses one, you know it doesn’t look good.
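One reason MSAA and deferred shading mix poorly is raw render-target memory: the entire G-buffer has to be multisampled, not just a single color target. A rough back-of-envelope comparison, where the G-buffer layout (four 32-bit targets plus depth) and the HMD-class resolution are typical illustrative assumptions rather than UE4's exact configuration:

```python
# Rough per-frame render-target memory for deferred vs forward rendering
# at a Rift/Vive-class combined resolution for both eyes. The G-buffer
# layout below is a typical example, not any engine's exact setup.

WIDTH, HEIGHT = 2160, 1200
pixels = WIDTH * HEIGHT
COLOR_BYTES, DEPTH_BYTES = 4, 4      # 32-bit color and depth targets

def deferred_mib(samples):
    """Four G-buffer targets + depth, all at the MSAA sample count."""
    return pixels * samples * (4 * COLOR_BYTES + DEPTH_BYTES) / 2**20

def forward_mib(samples):
    """One color target + depth at the MSAA sample count."""
    return pixels * samples * (COLOR_BYTES + DEPTH_BYTES) / 2**20

for s in (1, 4):
    print(f"{s}x MSAA: deferred {deferred_mib(s):.0f} MiB, "
          f"forward {forward_mib(s):.0f} MiB")
```

Under these assumptions, 4x MSAA multiplies an already-large G-buffer into the hundreds of megabytes per frame of writes and re-reads, while the forward path keeps the multisampled footprint a fraction of that size.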


All told, AMD feels that deferred rendering exacts a toll in terms of time, memory, and image quality in VR, and the payoff just isn’t there.


The alternative here is to adopt a form of forward rendering. Interestingly, it’s not a new technique; in fact it’s how GPU rendering started. It’s lighter weight, simpler, and faster. Also, forward rendering works nicely with MSAA, letting us improve edge quality very efficiently. So we think forward rendering is often a better fit for VR applications.


We’ve worked diligently to test and optimize the forward rendering path in Unreal Engine 4.15 for the best performance on AMD hardware. A number of VR development partners are using Unreal Engine, and we showed the performance benefits during our Capsaicin event at GDC.


“AMD has been on a continuous mission to make VR accessible to as many people as possible, and Epic’s forward rendering path in Unreal Engine 4 is a big step in that journey,” said Raja Koduri, Senior Vice President and Chief Architect, Radeon Technologies Group, AMD. “Anyone who has experienced Epic’s Robo Recall will immediately attest to the benefit of forward rendering in VR. We are working with VR developers to explore the benefits of forward rendering, which can result in beautiful, high-performing games on Radeon graphics.”


AMD is working with leading game developers to explore the benefits of forward rendering in VR games, including:


  • First Contact Entertainment: First Contact Entertainment’s breakout game, “ROM: Extraction” is one of the most visually appealing and exciting VR releases, debuting this past December to rave reviews. Available today, “Overrun” is a new content expansion to ROM: Extraction that makes use of forward rendering for unprecedented performance.
  • Limitless Studios: Directed by Matthew Ward and built in virtual reality using the Limitless VR Creative Environment, “Reaping Rewards” is an interactive VR experience exploring the emotional choices of a young Grim Reaper as you learn about life and death from your mentor. This interactive character-driven story harnesses forward rendering to bring the experience to life.
  • Survios: Since its Early Access release last year, Survios’ critically-acclaimed and award-winning game “Raw Data” has become a must-have title for all VR gamers. At AMD Capsaicin, Survios unveiled their highly-anticipated new title: “Sprint Vector,” which makes use of Unreal Engine 4.15 and forward rendering. An intense adrenaline platformer, Sprint Vector uses a unique intelligent fluid locomotion system to propel players through high-speed head-to-head races across challenging interdimensional obstacle courses.



Scott Wasson, Sr. Manager, Technical Marketing for the Radeon Technologies Group at AMD. His postings are his own opinions and may not represent AMD’s positions, strategies, or opinions. Links to third party sites and references to third party trademarks are provided for convenience and illustrative purposes only. Unless explicitly stated, AMD is not responsible for the contents of such links, and no third party endorsement of AMD or any of its products is implied.


[Originally posted on 11/22/17.]


If you’ve been lucky enough to try out virtual reality using the most popular PC-connected headsets, you know that they have an astounding ability to offer a sense of “presence”—to convince your brain you’re in another place. VR technology has incredible potential for gaming and other applications, but it’s also rather demanding. If your PC fails to send the next frame of the animation to your headset on time, you can lose that sense of presence. Worse yet, if your system really can’t keep up, repeated dropped frames can make the person wearing the headset feel awfully uncomfortable.


To avoid such problems, VR-ready PCs need a good, fast CPU and graphics processor—and the right software to drive it.


One of the best ways to meet the challenges of VR is by driving the CPU and GPU quickly and efficiently using DirectX® 12 and Vulkan®, two members of a new class of programming interfaces that give developers more direct access to the hardware. AMD has been a pioneer in next-gen programming interfaces, and we continue to work on building the drivers and software tools needed to enable great experiences.


So far, most VR applications on the PC have relied on the older DirectX® 11 infrastructure, but that’s beginning to change. Today, the folks at Futuremark are releasing an update to their popular VRMark® benchmark that adds a new test environment known as the Cyan Room. The Cyan Room benchmark runs exclusively in DirectX® 12, and it’s a nice demonstration of the potential for next-generation tools to make VR more compelling.



VRMark Cyan Room


The Cyan Room also highlights AMD’s continued performance leadership on this front. Here are some initial results from VRMark® Cyan Room, fresh from our performance lab.[1]




As you can see, the Radeon™ GPUs we tested have clear leads over their direct competition. What’s more, all the Radeon™ GPUs are meeting the key requirement for today’s VR headsets by delivering at least 90 frames per second in this test.[1]




VRMark® Cyan Room combines this solid performance with rich visuals thanks in part to the efficiency of DirectX® 12.


Because DirectX® 12 offers more direct control over the hardware, the developers at Futuremark could schedule work and arrange resources more optimally to make sure each frame of animation is rendered quickly. Meanwhile, asynchronous compute shaders allow multiple types of work to run on the GPU in overlapping fashion, keeping the graphics processor more fully utilized. That’s especially important for Radeon GPUs, which tend to have big, powerful shader arrays and robust support for asynchronous compute.


Next-gen tools have benefits on the CPU front, as well. The excellent results above come from a system based on a Ryzen™ 7 1800X processor. DirectX® 12’s more efficient model allowed the Cyan Room’s developers to eliminate unnecessary CPU overhead. At the same time, DirectX® 12’s improved threading allows the application to distribute work more effectively across a Ryzen processor’s multiple CPU cores and hardware threads.



VRMark Cyan Room


The VRMark® Cyan Room test points to a future where VR developers use next-generation programming interfaces like DirectX® 12 and Vulkan® to harness the full capacity of Radeon™ and Ryzen™ processors. With that sort of power at their fingertips, game developers should be able to create even more compelling VR experiences going forward.[2] Futuremark’s VRMark® benchmark is available from Futuremark’s website.



Scott Wasson, Sr. Manager, Technical Marketing for the Radeon Technologies Group at AMD. His postings are his own opinions and may not represent AMD’s positions, strategies, or opinions. Links to third party sites and references to third party trademarks are provided for convenience and illustrative purposes only. Unless explicitly stated, AMD is not responsible for the contents of such links, and no third party endorsement of AMD or any of its products is implied.



  1. Testing conducted by AMD Performance Labs as of November 18, 2017 on the Radeon RX 580 8GB, Radeon RX Vega 56, Radeon RX Vega 64, GeForce GTX 1060 6GB, GeForce GTX 1070 Ti Founders Edition, and GeForce GTX 1080 Founders Edition on a test system comprising a Ryzen 7 1800X CPU, 16GB DDR4-2933 system memory, and Windows 10 x64. The Radeon graphics cards were tested with Radeon Software 17.11.2. The GeForce cards were tested with the 388.81 driver. PC manufacturers may vary configurations, yielding different results. In VRMark Cyan Room using the no-headset option, the Radeon RX 580 scored 4721. The Radeon RX Vega 56 scored 7437. The Radeon RX Vega 64 scored 7776. The GeForce GTX 1060 6GB scored 3764. The GeForce GTX 1070 Ti Founders Edition scored 5950. The GeForce GTX 1080 Founders Edition scored 6437. In VRMark Cyan Room using the no-headset option, the Radeon RX 580 scored 103 FPS. The Radeon RX Vega 56 scored 162 FPS. The Radeon RX Vega 64 scored 170 FPS. The GeForce GTX 1060 6GB scored 82 FPS. The GeForce GTX 1070 Ti Founders Edition scored 130 FPS. The GeForce GTX 1080 Founders Edition scored 140 FPS. Performance may vary based on use of latest drivers. RX-169
  2. ©2017 Advanced Micro Devices, Inc. All rights reserved. AMD, the AMD Arrow logo, LiquidVR, Radeon, Ryzen, Threadripper, and combinations thereof are trademarks of Advanced Micro Devices, Inc. DirectX and Microsoft are registered trademarks of Microsoft Corporation in the US and other countries. Vulkan and the Vulkan logo are registered trademarks of the Khronos Group Inc. VRMark is a trademark of Futuremark Ltd. Other product names used in this publication are for identification purposes only and may be trademarks of their respective companies.
