
Dual GPU Gaming Gives Up the Ghost as Nvidia Ends SLI Support

SLI — an acronym that originally stood for Scan-Line Interleave, then later for Scalable Link Interface — is, as of today, effectively dead in the form we’ve known it the longest. Modern GPUs that support DX12 support two forms of SLI: implicit and explicit. Implicit SLI is the mode used in DirectX 11, as well as previous versions of Microsoft’s 3D API.

Nvidia made this announcement in the release notes for Version 456.38 of its GeForce Game Ready Software. Beginning on January 1, 2021, no new implicit SLI profiles will be provided for any RTX 2XXX or earlier GPU. All GPU support for SLI going forward will be via explicit SLI, which is to say, the game will have to support the mode directly rather than implementing the mode in-driver.

There’s little reason to expect game developers to consider doing this, however. Of the various Ampere GPUs, only the RTX 3090 has the appropriate bridge connection points. Game developers aren’t likely to invest in optimizing for the tiny percentage of the market that buys such cards, and while they may implement explicit support in a few games going forward to serve existing SLI owners, I expect the feature to die now that Nvidia won’t be providing implicit driver updates any longer.

Article: Dual GPU Gaming Gives Up the Ghost as Nvidia Ends SLI Support - ExtremeTech 

2 Replies
leyvin
Miniboss

Since DirectX 12 and Vulkan released their 1.0 versions (2015 and 2016 respectively), there has been little need for Graphics IHVs to support their own dedicated versions of Multi-GPU.

In somewhat basic terms, both Crossfire and SLI were Driver Hacks to let multiple GPUs handle the Graphics Processing... which wasn't particularly easy to implement, and even when it was, synchronising the GPUs was quite difficult; generally the end result wasn't an ideal experience.

Sure, you might get 40-70% Performance Gains... but you also typically had a lot of Micro-Stutter whenever the GPUs were out-of-sync; and this was an issue that progressively got worse with more GPUs.

AMD was a major contributor to the foundations of both DirectX 12 and Vulkan, which meant that both APIs integrated a Hardware-Agnostic version of Crossfire; this has become Multi-GPU.

This is actually disgustingly simple to support... and more importantly it provides Developers with a lot more Freedom in how they utilise it.

I will try to dig up the Prototype Apps that I made when DirectX 12 first released, as I bought an A-Series APU specifically to experiment with Hybrid Crossfire and Multi-GPU... and the results were extremely awesome.

See, historically speaking, Driver mGPU support was essentially AFR (Alternate Frame Rendering), that is to say each GPU would handle the next Frame Render in turn.

This wasn't how SLI originally worked, of course; instead it rendered the same frame across the multiple GPUs by splitting the screen between them (the original 3dfx Scan-Line Interleave had each card render alternating scan lines), with each GPU rendering its share at the same time.

Neither approach is very useful, as it essentially means you're restricted to pairing identical GPUs; and even then, as noted, there is Micro-Stutter as a result of the GPUs not running at identical Frequencies or having identical workloads.
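Just to make the difference concrete, here's a trivial sketch (plain C++, made-up frame loop, assuming the two identical GPUs classic SLI demanded) of how those two classic driver modes divided the work:

```cpp
#include <cstdio>

int main() {
    const int gpu_count     = 2;     // assumed: two identical GPUs, as classic SLI required
    const int frame_count   = 4;
    const int screen_height = 1080;

    // AFR: whole frames alternate between the GPUs.
    for (int frame = 0; frame < frame_count; ++frame)
        std::printf("AFR : frame %d -> GPU %d\n", frame, frame % gpu_count);

    // Split-frame: every frame is cut into fixed bands, one per GPU.
    for (int gpu = 0; gpu < gpu_count; ++gpu) {
        int top    = gpu * screen_height / gpu_count;
        int bottom = (gpu + 1) * screen_height / gpu_count;
        std::printf("SFR : GPU %d renders rows %d-%d of every frame\n", gpu, top, bottom - 1);
    }
    return 0;
}
```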

AMD came up with Hybrid Crossfire, which was arguably a good solution; albeit they basically never supported it (typical for AMD, to make something then ignore that it exists)... but the idea itself was awesome and does work insanely well, albeit still limited.

So how it worked was that the Dedicated GPU would handle the frame, but the Integrated GPU would be used as "additional cores" for processing the frame; as such the Integrated GPU no longer needed to use System Memory, and the Dedicated GPU's VRAM would be used instead. This was essentially taking advantage of how the HyperTransport Bus worked.

It's a great showcase of how frustrating it is that AMD never even attempted to provide HXT (HyperTransport) as an alternative to PCIe in Consumer Products, and instead ONLY supported it for Professional / Workstation / Server Hardware. Those markets are not where people are going to "invest" in unproven technology; there's too much risk involved. A desktop consumer, however, only risks their own money, so they'll adopt damn near anything if it gives them an advantage in performance, often regardless of the cost. So AMD went about supporting it backwards, and then decided that it was a "failure".

You want to know WHY this is frustrating? Because they've now released the Infinity Fabric successor to HXT, called Infinity Fabric Link... which annoyingly is ONLY available for Radeon Instinct. But look at the bandwidth available for it.

It's far beyond what even PCIe 5.0 will offer, yet AMD seem content to pay a premium to support PCIe 4.0 rather than take the opportunity of their current dominant position to push their own Bus Technology; or even offer it as an alternative for Enthusiasts.

In more layman's terms, an Infinity Fabric Link APU (Ryzen with Graphics) could potentially offer near-identical performance to a GPU with Dedicated VRAM, thanks to the reduction in Memory Latency and the speed offset from an additional 2 Memory Threads.

As it stands, going over PCIe, you're losing about 35-40% of the Integrated GPU's performance due to Memory Limitations. That's a HUGE bottleneck that no amount of "better" Hardware will improve.
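To put some rough numbers on that memory bottleneck (these are illustrative peak figures I'm assuming, not measurements of any specific APU): dual-channel DDR4-3200 shared with the CPU versus an entry-level card's GDDR6 on a 128-bit bus.

```cpp
#include <cstdio>

// Rough, illustrative peak-bandwidth comparison (not a benchmark).
// Assumed: dual-channel DDR4-3200 feeding an APU's iGPU vs. an entry-level
// discrete card with GDDR6 at 14 Gbps per pin on a 128-bit bus.
int main() {
    // Dual-channel DDR4-3200: 2 channels * 64-bit * 3200 MT/s / 8 bits-per-byte
    double ddr4_gbs  = 2.0 * 64.0 * 3200.0e6 / 8.0 / 1.0e9;   // ~51.2 GB/s, shared with the CPU
    // GDDR6 @ 14 Gbps per pin on a 128-bit bus
    double gddr6_gbs = 14.0e9 * 128.0 / 8.0 / 1.0e9;          // ~224 GB/s, exclusive to the GPU

    std::printf("iGPU (system DDR4) : ~%.1f GB/s (shared with the CPU)\n", ddr4_gbs);
    std::printf("dGPU (GDDR6)       : ~%.1f GB/s\n", gddr6_gbs);
    std::printf("ratio              : ~%.1fx\n", gddr6_gbs / ddr4_gbs);
    return 0;
}
```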

The only improvement AMD could make that would actually make it worthwhile upgrading from 3rd-Gen Vega to even 1st-Gen Navi would be to integrate HBCC (an HBM Cache) onto the APU; which would be grossly expensive, as would putting Dedicated VRAM (again HBM) onto the APU.


Anyhow back to Multi-GPU Support.

What DX12 and Vulkan do differently to the Crossfire / SLI hacks (and again, keep in mind that in those implementations the Graphics API isn't really aware that you're doing this) is that you are using the Graphics API itself to dictate how you're using the Hardware.
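As a rough sketch of what "the API is aware of your GPUs" looks like in practice, here's a minimal Vulkan 1.1 snippet (error handling trimmed, assumes the Vulkan SDK is installed) that asks the loader which Physical Devices can be driven together; DirectX 12 has an equivalent path via adapter enumeration. On most systems each GPU shows up as its own group of one, which explicit Multi-GPU is perfectly happy to work with.

```cpp
#include <vulkan/vulkan.h>
#include <cstdio>
#include <vector>

int main() {
    // Minimal instance; device-group enumeration was promoted to core in Vulkan 1.1.
    VkApplicationInfo app{VK_STRUCTURE_TYPE_APPLICATION_INFO};
    app.apiVersion = VK_API_VERSION_1_1;
    VkInstanceCreateInfo ici{VK_STRUCTURE_TYPE_INSTANCE_CREATE_INFO};
    ici.pApplicationInfo = &app;

    VkInstance instance;
    if (vkCreateInstance(&ici, nullptr, &instance) != VK_SUCCESS) return 1;

    // Ask the loader which physical devices can be driven together.
    uint32_t groupCount = 0;
    vkEnumeratePhysicalDeviceGroups(instance, &groupCount, nullptr);
    std::vector<VkPhysicalDeviceGroupProperties> groups(groupCount);
    for (auto& g : groups) g.sType = VK_STRUCTURE_TYPE_PHYSICAL_DEVICE_GROUP_PROPERTIES;
    vkEnumeratePhysicalDeviceGroups(instance, &groupCount, groups.data());

    for (uint32_t i = 0; i < groupCount; ++i) {
        std::printf("Device group %u: %u physical device(s)\n",
                    i, groups[i].physicalDeviceCount);
        for (uint32_t d = 0; d < groups[i].physicalDeviceCount; ++d) {
            VkPhysicalDeviceProperties props;
            vkGetPhysicalDeviceProperties(groups[i].physicalDevices[d], &props);
            std::printf("  %s\n", props.deviceName);
        }
    }

    vkDestroyInstance(instance, nullptr);
    return 0;
}
```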

This has some major benefits. 

Specifically, now that the Graphics API knows you want to use more than 1 GPU, you can actually control the Render Queue; so even with an AFR approach, you can Frame Pace much better.

Micro-Stutter disappears.

But it goes beyond this, as said control also means you can have GPUs with very different performance profiles working together. So, let's say you run a benchmark when the app starts to see the performance difference; you can then split the Render Output so that 1 GPU (say your Integrated) handles the top 5-10% of the Screen, while the Dedicated handles the bottom 90-95%... that way you get a much more tangible performance gain out of the pairing.
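A toy sketch of that idea; the benchmark numbers are made up, the point is just that the split comes out of measured throughput rather than being hard-coded:

```cpp
#include <cstdio>

// Toy example: derive an uneven screen split from measured per-GPU throughput.
// The frame times are placeholders standing in for a startup benchmark.
int main() {
    const int screen_height = 2160;

    // Pretend the startup benchmark measured these average frame times (ms).
    double igpu_frame_ms = 45.0;   // slow integrated GPU
    double dgpu_frame_ms = 4.5;    // fast dedicated GPU

    // Throughput is the inverse of frame time; split the screen in proportion.
    double igpu_rate  = 1.0 / igpu_frame_ms;
    double dgpu_rate  = 1.0 / dgpu_frame_ms;
    double igpu_share = igpu_rate / (igpu_rate + dgpu_rate);   // ~0.09 with these numbers

    int igpu_rows = static_cast<int>(screen_height * igpu_share + 0.5);
    std::printf("iGPU renders the top %d rows (%.0f%% of the frame)\n",
                igpu_rows, igpu_share * 100.0);
    std::printf("dGPU renders the bottom %d rows\n", screen_height - igpu_rows);
    return 0;
}
```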

Another aspect is being able to SHARE Rendering Resources, like Memory, Processed Data, etc.

As such you can, for example, have one GPU dedicated to processing the Scene itself, while the other simply handles the Post-Processing... which is a great use for the lower-end GPU.

There are, again, much bigger gains to be had from taking this approach... and this is ONLY possible due to Shared Resource Data, which isn't possible via the traditional mGPU approach.
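Here's a conceptual sketch of that split (not real graphics code: the two "GPUs" are just threads and the fence is a std::future, standing in for a cross-adapter shared resource plus a real fence):

```cpp
#include <chrono>
#include <cstdio>
#include <future>
#include <thread>

// Conceptual only: "GPU 0" renders the scene, "GPU 1" post-processes it,
// and the hand-off between them is modelled with a std::future.
struct Frame { int id; bool post_processed; };

Frame render_scene(int id) {          // "GPU 0": geometry, lighting, the scene itself
    std::this_thread::sleep_for(std::chrono::milliseconds(8));
    return Frame{id, false};
}

Frame post_process(Frame f) {         // "GPU 1": tone mapping, bloom, upscaling...
    std::this_thread::sleep_for(std::chrono::milliseconds(3));
    f.post_processed = true;
    return f;
}

int main() {
    const int frames = 4;
    // GPU 0 starts on frame 0 immediately.
    std::future<Frame> in_flight = std::async(std::launch::async, render_scene, 0);
    for (int id = 1; id <= frames; ++id) {
        Frame ready = in_flight.get();          // wait on the "fence" for the previous frame
        if (id < frames)                        // GPU 0 moves on to the next frame...
            in_flight = std::async(std::launch::async, render_scene, id);
        Frame done = post_process(ready);       // ...while GPU 1 post-processes the finished one
        std::printf("frame %d post-processed: %s\n",
                    done.id, done.post_processed ? "yes" : "no");
    }
    return 0;
}
```

The important bit is that GPU 0 is already busy with the next frame while GPU 1 finishes the previous one, which is exactly the kind of overlap that sharing resources across GPUs buys you.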

I'd argue an excellent usage (although we'll see if anyone does this, I doubt it) would be Hybrid Rendering.

The RTX 30-Series has excellent Ray Tracing performance and excellent Traditional (raster) performance; but it doesn't actually have particularly good Hybrid performance.

If, for example, you paired 2x RTX 3070, you could have one dedicated to Ray Tracing and the other dedicated to Traditional rendering. This would result in BETTER overall performance with Hybrid Rendering than if you did an AFR approach, because of how the Ampere Architecture is designed.

Yet what is even better is that you can Mix and Match GPUs.

As we know DLSS 2.0 is objectively excellent technology (and DLSS 3.0 looks to be even better).

Well, what you could potentially do is take, say, an RX 6800 XT (which will have very competitive Hybrid Rendering but completely lacks DLSS Support) and pair it with an RTX 3050 (assuming NVIDIA do release one, but they should; it will hopefully offer similar performance to an RTX 2060).

This would allow you to use it as a Secondary GPU, specifically for DLSS 2.0/3.0 Support.

You could even offload some other Post-Processing to it... potentially this would provide a FAR bigger performance improvement than simply having 2x RX 6800 XT, and be a damn sight cheaper solution.

Of course it would require Developers to _actually_ bother supporting Multi-GPU, which for some reason most don't. 

Still the potential for some interesting usage of it is there... and it is how Multi GPU will be supported going forward.

It's great NVIDIA have finally decided to Officially cease support for SLI in favour of Multi-GPU.

In reality SLI and Crossfire were dead 3-5 years ago. They are just now finishing tossing dirt on the coffin.

It always was something that appealed to very few, and as high-refresh monitors came on the scene, no competitive gamers were going to deal with the extra latency and micro-stutter that SLI brings.