This question has somehow slipped through unnoticed, apologies. Will have a look at this later today and be back with you shortly.
There are a couple of things that caught my attention in the source code you provided:
* In l. 683 & 685: the vkCmdWriteTimestamp() calls pass VK_PIPELINE_STAGE_TOP_OF_PIPE_BIT as the <pipelineStage> argument. This should be VK_PIPELINE_STAGE_BOTTOM_OF_PIPE_BIT instead, assuming your goal is to have each timestamp written only after all previously enqueued commands have finished executing.
* In l. 801 & 812: the <dataSize> argument passed to vkGetQueryPoolResults() is set to sizeof(uint32) * 2, which suggests your app expects each timestamp to be 64-bit. If that is the case, you should also set the VK_QUERY_RESULT_64_BIT bit in the <flags> argument.
* In l. 801 & 812: the VK_QUERY_RESULT_WITH_AVAILABILITY_BIT flag your app passes in the same calls allows vkGetQueryPoolResults() to return before the results become available. It is therefore not guaranteed that both timestamps are reported as available the moment each of these calls returns. For that to hold, you also need to set the VK_QUERY_RESULT_WAIT_BIT bit in the <flags> argument, which makes the call block until the timestamp values become available.
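Putting the three points above together, the corrected sequence might look roughly like this. This is only a sketch: `device`, `cmdBuffer` and `queryPool` stand in for your app's actual handles, and I'm assuming two timestamp queries at indices 0 and 1:

```c
/* Sketch only -- device, cmdBuffer and queryPool are placeholders. */

/* Write each timestamp after all preceding commands finish executing: */
vkCmdWriteTimestamp(cmdBuffer, VK_PIPELINE_STAGE_BOTTOM_OF_PIPE_BIT, queryPool, 0);
/* ... enqueue the workload you want to measure ... */
vkCmdWriteTimestamp(cmdBuffer, VK_PIPELINE_STAGE_BOTTOM_OF_PIPE_BIT, queryPool, 1);

/* Later, read back two 64-bit words per query (timestamp + availability),
 * blocking until both results are available: */
uint64_t results[4]; /* {timestamp, availability} x 2 queries */
VkResult res = vkGetQueryPoolResults(device,
                                     queryPool,
                                     0,                    /* firstQuery */
                                     2,                    /* queryCount */
                                     sizeof(results),      /* dataSize   */
                                     results,
                                     sizeof(uint64_t) * 2, /* stride     */
                                     VK_QUERY_RESULT_64_BIT |
                                     VK_QUERY_RESULT_WAIT_BIT |
                                     VK_QUERY_RESULT_WITH_AVAILABILITY_BIT);
```

With VK_QUERY_RESULT_WAIT_BIT set, the availability words will read as non-zero on return; if you'd rather not block, drop the wait bit and check them explicitly instead.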
Having said that, I can confirm there is a bug in our implementation of vkCmdResetQueryPool() which causes the <firstQuery> argument to always be treated as 0. In your app's case, this manifests as a lock-up, because the second "reset query pool" command resets both query slots instead of only the second one. We are going to fix this in the next driver release, or the one after that. For now, as a workaround, please reset both slots with a single call.
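For reference, the workaround is a single reset covering both slots, issued before either timestamp is written (again a sketch; `cmdBuffer` and `queryPool` are placeholders for your handles):

```c
/* Workaround: reset both query slots in one call, issued before any
 * timestamps are written, so the buggy <firstQuery> handling (always
 * treated as 0 by the current driver) has no visible effect. */
vkCmdResetQueryPool(cmdBuffer,
                    queryPool,
                    0,   /* firstQuery */
                    2);  /* queryCount */
```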
Thank you for reporting this issue!