
szucsi
Adept I

Re: No Improvement with Crossfire

Thank you for the feedback.

Regarding the PSU: from what I can see, one card has a maximum power draw of 175 W, with an optional increase of 20% (I have not done any overclocking, etc.).

So even if the two cards draw 200 W each and the CPU about 100 W, that adds up to 500 W.

The remaining 150 W should be enough for the two SSDs, the 140 mm CPU fan, and the three 120 mm system fans.

In theory
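Roughly, the math I'm going by (a quick sketch; the wattages are rounded guesses, not measurements, and the 650 W total is only implied by the 500 W + 150 W split above):

```python
# Quick PSU headroom sketch; all wattages are rounded estimates, not measured values.
gpu_w = 200            # per card: 175 W limit, rounded up with some margin
cpu_w = 100            # approximate CPU draw
psu_w = 650            # total capacity implied by the 500 W + 150 W split above

load_w = 2 * gpu_w + cpu_w      # both cards plus the CPU
headroom_w = psu_w - load_w     # what is left for SSDs and fans

print(f"GPU + CPU load: {load_w} W, remaining headroom: {headroom_w} W")
```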

Can you help me with a link to the best place to request technical support?

pokester
MVP

Re: No Improvement with Crossfire

One link is in Radeon Settings / Preferences. Another is here: Online Service Request | AMD

ajlueke
Grandmaster

Re: No Improvement with Crossfire

Perhaps you tried this already, but if you make the second card the primary display adapter, does it work on its own as well, or is it just the first card?

szucsi
Adept I

Re: No Improvement with Crossfire

Yes, I have disconnected the primary card and used the secondary to run a game and it works perfectly.

I have also tested the secondary PCIe slot on the motherboard by putting the secondary card in it, and that works as well.

bigbopper
Adept I

Re: No Improvement with Crossfire

Thanks pokester for the reply and explanation. Guess I'm late for the dance.

I've been reading about Crossfire and SLI for years and thought it was supposed to be a great technology. I regret learning that it's of little use.

I have an ASRock 990FX Extreme9 motherboard, an AMD FX-8350 CPU, and 16 GB of RAM. The manual says to use PCIE1 and PCIE4 for dual cards and shows they run at x16. It goes on to state that a third Crossfire board can go in the PCIE5 slot, but then the PCIE4 and PCIE5 slots will run at x8.

However, the Radeon app shows (with my two-card config and Crossfire enabled) that PCIE4 is running in x8 mode. I emailed ASRock, and they explained that in a dual-card setup both slots will run at x16, but when Crossfire is enabled both slots run in x8 mode. So that's another reason I'm not getting all the performance I was hoping for.

I have a PCIe adapter card in slot 5 carrying an NVMe Samsung 960 EVO M.2 drive that I boot from. I wonder if that is what puts slot 4 into x8 mode, but I'd have to remove my boot drive and boot from another one to find out, yada yada, and that is more effort than I want to put in for what will probably be no benefit anyway.

Thanks again for the response. Too often when I post to a forum I get no reply at all, so I appreciate you taking the time.

ajlueke
Grandmaster

Re: No Improvement with Crossfire

Hello,

Do you see Crossfire scaling in benchmarks like Fire Strike, for example? That benchmark definitely supports Crossfire, so if you see identical scores with and without Crossfire enabled, there is something else going on. Also check your manual regarding that M.2 drive. My board has limited PCIe lanes, and if my secondary M.2 slot is populated with a drive, one of the PCIe ports drops to a lower speed.
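As a rough way to put a number on it (the scores below are placeholders; substitute your own Fire Strike graphics scores):

```python
# Hypothetical Fire Strike graphics scores; replace with your own runs.
score_single = 15000      # Crossfire disabled
score_crossfire = 15200   # Crossfire enabled

scaling = (score_crossfire / score_single - 1) * 100
print(f"Crossfire scaling: {scaling:+.1f}%")
# A result near 0% suggests the second GPU is idle or something else is limiting it;
# a title that scales well under AFR should land well above that.
```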

It isn't that you shouldn't see any scaling with Crossfire, but it is really dependent on the game and the engine it was developed with. DX12 games are more hit and miss, because if the developer didn't implement multi-GPU, the second card won't be used at all.

pokester
MVP

Re: No Improvement with Crossfire

My pleasure. I think the hope is that with DX12 directly controlling multi-GPU, this may become viable. My concern is that developers will still see multi-card users as such a minority that they won't consider it worth their time to support in their upcoming engines. Even though DX12 supports it, that doesn't mean the game will; it is entirely on the developers at this point. Frankly, most of them are pretty cheap, and games are buggy these days unless they prove popular long enough to actually get fixed. The bugs in this latest BFV, for instance, are a disgrace. It's still a super fun game that doesn't deserve all its bad press, but when you open the door for negativity, that's what happens.

ajlueke
Grandmaster

Re: No Improvement with Crossfire

It isn't really entirely up to the developers; there are actual hardware limitations at play, and the way rendering engines are designed limits the effectiveness of multi-GPU setups.

There are many rendering techniques employed now that both help games look better (multipass AA, temporal techniques) and buffer common data between frames so certain elements don't have to be computed again. This helps games run at a higher frame rate on lower-end hardware, but it simultaneously creates frame dependencies, where information from the previous frame is needed by the subsequent frame. For a single GPU, that isn't much of an issue: the information can be buffered in cache or VRAM and pulled up again for the next frame, freeing the GPU to do other work.
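As a toy illustration only (not any real engine's or API's code), the reuse pattern looks like this: buffers produced for frame N are read again while rendering frame N+1.

```python
# Toy illustration of an inter-frame dependency (not any real rendering API).
def render_frame(index, history):
    # Temporal techniques reuse buffers produced for the previous frame instead of
    # recomputing them; on a single GPU that data is already sitting in VRAM.
    reused_previous = history is not None
    new_history = {"frame": index, "taa_buffers": f"data produced for frame {index}"}
    return reused_previous, new_history

history = None
for frame in range(3):
    reused, history = render_frame(frame, history)
    print(f"frame {frame}: reused previous frame's buffers -> {reused}")
```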

In the DX11-and-earlier world of Crossfire and SLI, those dependencies become problematic. DX11 and earlier APIs are pretty much limited to alternate frame rendering (AFR), where one GPU renders one frame and the other GPU renders the next; because the API and the application never see more than one rendering unit, dividing the work has to be done by the driver. In an application with no frame dependencies, the expected performance gain would be close to 100%.

However, as rendering engines get more complex to produce better visuals on lower-end hardware (like consoles), frame dependencies exist to make more efficient use of the hardware. In an SLI/Crossfire setup, you can recompute all of that information for each frame, which is what is often done. You then lose the efficiencies gained in the single-GPU setup and wind up with less than 100% scaling. You can also copy the data shared between frames from one GPU to the other, but the application needs to signal the driver that certain data must be copied between GPUs, since the application only sees one rendering unit. This is more difficult to implement, and a lot of engines don't bother with it.
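A very rough cost model of that tradeoff, with invented numbers purely for illustration: under AFR, each GPU either redoes the shared work itself or waits for it to be copied over, and whichever is cheaper caps your scaling.

```python
# Toy cost model for AFR with inter-frame dependencies (all numbers are made up).
frame_ms_single = 16.0   # one GPU rendering a frame while reusing last frame's data
shared_work_ms = 4.0     # cost of recomputing the data that would normally be reused
copy_ms = 6.0            # cost of copying that shared data between GPUs instead

# Option A: each GPU recomputes the shared data every frame.
afr_recompute = frame_ms_single + shared_work_ms
# Option B: copy the shared data from the other GPU (needs explicit engine support).
afr_copy = frame_ms_single + copy_ms

best_afr = min(afr_recompute, afr_copy)
# Two GPUs in AFR deliver a frame every best_afr / 2 ms on average.
scaling = frame_ms_single / (best_afr / 2)
print(f"AFR scaling vs. a single GPU: {scaling:.2f}x (ideal would be 2.00x)")
```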

In DX12 and Vulkan, an application can see all the GPUs in the system, and a developer can decide exactly how to employ multi-GPU. Here, developers like to employ what is called a rendering pipeline, where one GPU starts a frame and, at a predefined point, passes it to the next GPU. Yet even with these implementations you don't get perfect scaling, and a lot of developers don't bother. That is, as pokester said, because not many users have multiple GPUs, but also because a rendering pipeline winds up being hamstrung by the PCIe bus. To copy data from one GPU to the next, the data has to pass over the PCIe bus, and that data rate is much too slow to let one GPU render half a frame and copy everything it has done to the second. So the gains you get from pipelining are ultimately limited by the slow interconnect between GPUs.
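To put illustrative numbers on that bottleneck (the buffer size is invented, and the bandwidth figure is approximate): copying even a modest set of intermediate buffers over PCIe 3.0 x16 eats a big slice of a 60 fps frame budget.

```python
# Rough estimate of handing half-rendered frame data to the second GPU (illustrative figures).
buffer_mb = 200              # intermediate G-buffer/depth/etc. data handed off mid-frame
pcie3_x16_gb_s = 16.0        # approximate usable PCIe 3.0 x16 bandwidth

transfer_ms = buffer_mb / 1024 / pcie3_x16_gb_s * 1000
frame_budget_ms = 1000 / 60  # ~16.7 ms per frame at 60 fps

print(f"Transfer time: {transfer_ms:.1f} ms of a {frame_budget_ms:.1f} ms frame budget")
# Roughly 12 ms of copying per frame wipes out most of what the second GPU could contribute.
```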

In summary, a developer can either build a rendering pipeline and copy data between GPUs, or completely recompute all the information every frame, depending on which produces the smaller hit to frame rates. If there are enough inter-frame dependencies in the engine, the gains from a multi-GPU setup can be so small that it isn't worth implementing. You could instead remove those dependencies from the engine, but doing so makes the engine run less efficiently on single-GPU setups, in terms of both frame rate and graphical fidelity.

There is hope, however. NVidia now has NVLink in consumer cards, which massively increases the bandwidth between GPUs. Products like that will allow developers to build frame-pipelining engines that account for the faster interconnect, increasing the benefit of a multi-GPU setup, and may get more developers to implement it, as there would be a tangible benefit. AMD has its own version of NVLink in the works as well, xGMI, which should be able to deliver the same sort of interconnect bandwidth.
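Repeating the same hand-off estimate with faster interconnects shows why that matters (the buffer size is the same invented example as above, the bandwidth values are approximate, and the xGMI figure is only an assumption):

```python
# Same hand-off estimate as above, over different interconnects (approximate bandwidths).
buffer_mb = 200
links_gb_s = {
    "PCIe 3.0 x16 (approx.)": 16.0,
    "NVLink bridge (approx.)": 50.0,
    "xGMI (assumed comparable)": 50.0,
}

for name, bandwidth in links_gb_s.items():
    transfer_ms = buffer_mb / 1024 / bandwidth * 1000
    print(f"{name:26s}: {transfer_ms:5.1f} ms per hand-off")
# Cutting the hand-off from ~12 ms to ~4 ms is what makes frame pipelining worth building.
```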

pokester
MVP

Re: No Improvement with Crossfire

I want to start by saying that I know you said nothing to the contrary; what I am saying is because I find most any new tech from Nvidia concerning. I can't deny their implementations are good, and AMD could currently take some lessons from their ability to make their base happy with drivers. However, I'd call NVLink promising if it didn't also sound PROPRIETARY. Many of the issues we have in graphics today exist because the green team doesn't want to play with others. They want to be a monopoly; they want to dictate technology and how it gets used, solely to make money. Take the new ray tracing: it isn't really a new feature. It is something that is already in the DX12 spec, something that can be done on AMD's compute cores. NVIDIA introduced it to control how compute is used on their cards, not to give us a new feature. They want you to buy a much more expensive Quadro if you want compute. All this is, is another PROPRIETARY attempt to control product usage and pricing. AMD has time and again been an innovator that shares its tech with the world openly. So, that being said, this new tech, while it sounds great, unless it is adopted as an open standard, sounds like yet another stumbling block to sort of replicate or work around. Microsoft should do better at policing this stuff and demand that if they support it, it is an open standard for all hardware makers.

ajlueke
Grandmaster

Re: No Improvement with Crossfire

While I understand your trepidation where NVidia is concerned, NVLink doesn't seem to be that type of proprietary product. One of the major issues right now in multi-GPU setups, from a programming standpoint, is the bandwidth available for communication between GPUs. It really limits the scaling and the rendering techniques that can be used. NVLink simply raises that bandwidth, so now I, as a developer, can design my engine assuming a new threshold of inter-GPU bandwidth.

The application won't care how that bandwidth is obtained, but it will only perform optimally if that amount of bandwidth is present. So for an AMD multi-GPU setup to give similar scaling, it would have to provide bandwidth similar to NVLink, which it sounds like AMD will do with xGMI. Of course, the AMD version is baked right into the hardware and requires nothing from the end user. NVLink you have to buy separately; it isn't included in the price of your $1200 GPU.
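Put another way, an engine could simply gate its multi-GPU path on whatever inter-GPU bandwidth the system reports, regardless of whether NVLink, xGMI, or plain PCIe provides it. A minimal sketch of that decision, with an arbitrary threshold:

```python
# Sketch: pick a multi-GPU strategy from reported inter-GPU bandwidth (threshold is arbitrary).
def choose_multi_gpu_path(inter_gpu_bandwidth_gb_s, min_pipeline_bw_gb_s=40.0):
    # The engine doesn't care whether the bandwidth comes from NVLink, xGMI, or PCIe;
    # it only cares whether frame pipelining can hide the hand-off cost.
    if inter_gpu_bandwidth_gb_s >= min_pipeline_bw_gb_s:
        return "frame pipelining across GPUs"
    return "single-GPU rendering (or AFR fallback)"

for bandwidth in (16.0, 50.0):
    print(f"{bandwidth:5.1f} GB/s -> {choose_multi_gpu_path(bandwidth)}")
```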