
OpenGL & Vulkan

joren_1147
Adept I

[RX6700 XT OpenGL] Compute shader write into image3D error!

Hi, I'm a graphics engine developer. The engine uses the OpenGL API, and recently I used a compute shader to do some work.

I use an image3D with the RGBA16F format and a size of 256*128*32. In the compute shader, after computing the result, I write it into the texture.

Unfortunately, no matter how I code it, I only get correct results for layer 0 (viewed in RenderDoc: slice 0 is entirely correct), and the remaining 31 layers contain no results.

But on an Nvidia graphics card, everything works fine!

The shader code is as follows:

SAMPLER2D(inputTexture, 0);

layout(rgba16f, binding=0) writeonly uniform image3D outputTarget0;
layout(rgba16f, binding=1) writeonly uniform image3D outputTarget1;
layout(rgba16f, binding=2) writeonly uniform image3D outputTarget2;
layout(rgba16f, binding=3) writeonly uniform image3D outputTarget3;

NUM_THREADS(1, 1, 1)
void main()
{
    ivec3 index = ivec3(gl_GlobalInvocationID);

    // Call the function to compute all four results.
    vec4 result0, result1, result2, result3;
    myFunc(result0, result1, result2, result3);

    imageStore(outputTarget0, index, result0);
    imageStore(outputTarget1, index, result1);
    imageStore(outputTarget2, index, result2);
    imageStore(outputTarget3, index, result3);
}

The C++ side looks roughly like this:

initTexture3d();
createTexture3d();
bindAllTextureSlot();              // bind the image3D targets for the compute shader
shader.dispatch(256, 128, 32);     // one work group per texel (local size is 1x1x1)
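
For context, those wrapper calls boil down to roughly the following raw OpenGL calls. This is a simplified sketch of one of the four targets, not my actual engine code, and it assumes a GL 4.3+ context:

// Create and allocate one RGBA16F 3D texture (256 x 128 x 32, single mip level).
GLuint tex;
glGenTextures(1, &tex);
glBindTexture(GL_TEXTURE_3D, tex);
glTexStorage3D(GL_TEXTURE_3D, 1, GL_RGBA16F, 256, 128, 32);

// Bind level 0 of the texture to image unit 0 for writing
// (the fourth argument is the 'layered' flag).
glBindImageTexture(0, tex, 0, GL_FALSE, 0, GL_WRITE_ONLY, GL_RGBA16F);

// Local size is 1x1x1, so this launches one invocation per texel.
glDispatchCompute(256, 128, 32);

// Make the image writes visible to later reads/sampling.
glMemoryBarrier(GL_SHADER_IMAGE_ACCESS_BARRIER_BIT);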

I tried using the image3D with the RG16F format, writing only the R and G channels, and that works fine, but RGBA16F does not!

I don't understand what's happening inside the driver. If you have any suggestions, please let me know!

Thank you!

My computer hardware list is as follows:

AMD RX6700XT (Adrenalin Edition 23.Q1.1 2023/4/24)

Intel i7-6800k

9 Replies
dipak
Big Boss

Hi @joren_1147 ,

Thanks for reporting the issue.

AMD RX6700XT (Adrenalin Edition 23.Q1.1 2023/4/24)

Could you please try the latest Adrenalin driver available here: amd-radeon-rx-6700-xt?

Also, to reproduce the issue locally, it would be helpful if you could provide the complete source code of the above example.

Thanks.


OK, thanks! I'll try the latest driver. If the issue persists, I'll provide a standalone demo in my spare time, which may take a while.

Hi, here is a simple demo for this issue: https://github.com/YoyoWang1122/AMD_Demo.

When you run it, make sure to run RenderDoc to capture the 3D texture's result. You can see that slice 0 is fine and the other slices are wrong!

Thanks!

Hi @joren_1147 ,

Thanks for providing the demo. I will report the issue to the OpenGL team.

Thanks.


OK, if you guys have any feedback, please notify me as soon as possible!


I have filed an internal bug ticket to track this issue. Will keep you updated on its progress.

Thanks.


Hi @joren_1147 ,

Below is the feedback from the OpenGL team:

"It seems the user didn't follow the OpenGL spec correctly to write the app.

1) I noticed that the user set the parameters of layered to false and layer to 0 < means slice 0> ( glBindImageTexture(0, TH 1: , 0, False, 0, GL_WRITE_ONLY, GL_RGBA8) )  when writing data to image3D through compute shader.

According to the spec: If layered is TRUE, the entire level is bound. If layered is FALSE, only the single layer identified by layer will be bound. When layered is FALSE, the single bound layer is treated as a different texture target for image accesses.

So only slice 0 will be written and the rest of the slices will not and should not be written.

2) 

>> on Nvidia graphics card, everything works fine!

Seems like they are not following the spec because all slices should only be written if the parameter of layered is TURE."
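
In other words, to let the compute shader write every slice of the 3D texture, the image has to be bound with layered set to TRUE. A minimal sketch of such a binding (the texture handle and format here are placeholders for whatever the app actually uses):

// Bind the whole of mip level 0 (all 32 slices) to image unit 0;
// with layered = GL_TRUE the 'layer' argument is ignored.
glBindImageTexture(0, texture, 0, GL_TRUE, 0, GL_WRITE_ONLY, GL_RGBA16F);

With layered = FALSE, only the single slice selected by the layer argument is bound, so imageStore can only reach that one slice.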

Thanks.


Yes, that was exactly the issue, and I have fixed it! Strict spec enforcement is great; I'm going to keep writing my shaders on the AMD card from now on. Hats off, and thanks to all of you!

Thanks @joren_1147. It's nice to hear that the issue has been resolved.