I thought of a follow-up. I believe the Xbox One only has 2 ACEs and the PS4 has 8. Do you think the limited number of ACEs on the XB1 might be one of the reasons devs would hold back on using a heavy amount of async compute? I also remember reading a comment saying that 8 ACEs was a little overkill.
I think devs treat each platform's capabilities independently. Especially if a dev is using DX12, that's a very PC-focused decision, and it follows that they care about PC and making good decisions for that.
I recently built an all-red computer; I combined an A10-7870K with an R9 Fury X to play around with OpenCL, DX12, and (eventually) Vulkan. My motivation is to write an engine that uses the compute power of the APU for low-latency tasks and puts the rest onto the dGPU. This seems like an obvious winner to me, but I'm curious why AMD isn't marketing their APUs in this manner and is instead pushing them into the embedded space. It seems that features such as explicit multi-GPU and HSA capabilities should push APUs to eventually dominate the CPU space. Is the tech too early to market properly, or is this simply not what the future holds? Is DX12 already in a place where we can effectively combine the power of an iGPU and dGPU?
You said that DX12 allows a graphics card to run compute and graphics workloads simultaneously. Will it result in higher power consumption and will it make the card more or less energy efficient?
We have noticed no change in GPU power consumption, and APU power consumption tends to decrease by a few watts. Either way, more work is accomplished per watt of power consumed, so that's a perf/watt improvement.
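The perf/watt claim is just a ratio: if async compute lets the GPU finish more work in the same window at the same board power, work-per-watt goes up even though the wattage doesn't. A minimal sketch with hypothetical numbers (the 60 fps, 200 W, and 15% figures are illustrative, not measured data):

```python
# Illustrative perf/watt arithmetic. All numbers are hypothetical examples,
# not benchmarks: async compute overlaps compute with graphics work, so more
# frames finish per second at roughly unchanged board power.

def perf_per_watt(frames_per_second: float, watts: float) -> float:
    """Work accomplished per watt of power consumed."""
    return frames_per_second / watts

baseline = perf_per_watt(60.0, 200.0)  # graphics and compute serialized
async_on = perf_per_watt(69.0, 200.0)  # ~15% more work, same power draw

improvement = async_on / baseline - 1.0
print(f"perf/watt improvement: {improvement:.0%}")
```

Same watts, more frames: the efficiency gain comes entirely from the numerator.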
We've put quite a bit of weight into promoting DX12 explicit mGPU. Allowing the APU to function as a graphics co-processor for the discrete GPU, that receives appropriately-sized workloads, is awesome. Long story short, I agree: it's important, it's a winner, and DX12 is there.
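The "appropriately-sized workloads" part is the key design decision under explicit mGPU: the application, not the driver, chooses which adapter gets which work. A minimal sketch of one possible policy, sizing each slice to an adapter's relative throughput. Everything here (the names, the throughput numbers, the proportional policy) is a hypothetical illustration, not AMD's engine code or the actual D3D12 API:

```python
# Hypothetical sketch of app-side scheduling under DX12 explicit multi-GPU:
# the small iGPU acts as a co-processor and receives a slice of the frame
# proportional to its (illustrative) throughput; the dGPU gets the rest.

from dataclasses import dataclass

@dataclass
class Adapter:
    name: str
    relative_throughput: float  # illustrative figure, not a real API metric

def partition(total_work: float, adapters: list[Adapter]) -> dict[str, float]:
    """Split one frame's work proportionally to each adapter's throughput."""
    total = sum(a.relative_throughput for a in adapters)
    return {a.name: total_work * a.relative_throughput / total
            for a in adapters}

gpus = [Adapter("APU iGPU", 1.0), Adapter("Fury X", 8.0)]
shares = partition(100.0, gpus)  # dGPU takes the bulk, iGPU a small slice
```

In a real engine the split would be tuned per workload type (e.g. latency-sensitive compute to the APU, heavy rendering to the dGPU) rather than by a single throughput number.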
I had a reply to this, but don't know where it went.
We bypassed the need for a TrueAudio SDK by building plugins and support directly into the off-the-shelf audio engines licensed by game devs. In the audio space, this is generally easier than giving them an SDK and telling them to design their own plugins for an engine they've already licensed.
In the future will we be seeing any changes to how GPUs are designed? Will there be DX12 specific features/extensions or any other hardware change that'll allow DX12 to squeeze even more performance out of a GPU?
New DX releases have a feature list that's a balance of what each of the gfx vendors have today, and what they'll have tomorrow. Microsoft, AMD, Intel, NVIDIA and some top game devs participate in that negotiation to define the ultimate spec.
I would say the relationship between DX and hardware is symbiotic: the hardware informs the software, which informs the hardware, and so on ad infinitum. So there is hardware in GPUs today that required DX12 (or Mantle) to unveil. And there are features in DX12 that will be better off with new hardware down the line.