I noticed that when I increase the size of the buffer I transfer from CPU to GPU, the distribution of transfer times develops a structure with two peaks that are far apart from each other. One of the peaks agrees with the transfer time of the smaller buffer, while the other peak is shifted to a time about 4 times larger than normal.
I don't understand why, with the same data, we sometimes get the correct transfer time and sometimes one that is 4 times larger.
Has anybody seen something like this?
In this picture you can see the transfer time as a function of the number of events I process on the GPU (directly proportional to the buffer size). As you can see, at 10000 events the CPU->GPU mean time increases a lot and has a huge error bar. This is due to the presence, only for more than 10000 events, of two populations of transfer times: one with a mean lower than 200 microseconds and the other with a mean greater than 1 ms. I don't understand why some repetitions have a transfer time of about 1 ms and others of about 200 microseconds, given that the input data is the same for every repetition.
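To characterize the two populations, this is roughly how I split the repeated timings by a cut between the two peaks. A minimal sketch, assuming the per-repetition transfer times are collected in a list in microseconds; the function name, the 500 µs threshold, and the sample values are all illustrative placeholders, not my real measurements:

```python
# Hypothetical sketch: split repeated CPU->GPU transfer timings into the
# two populations visible in the histogram. Times are in microseconds.
def split_populations(times_us, threshold_us=500.0):
    """Split timing samples at a threshold; return (mean, count) per group."""
    fast = [t for t in times_us if t < threshold_us]
    slow = [t for t in times_us if t >= threshold_us]
    mean = lambda xs: sum(xs) / len(xs) if xs else 0.0
    return (mean(fast), len(fast)), (mean(slow), len(slow))

# Placeholder data: four "fast" repetitions near 200 us, two "slow" near 1 ms.
samples = [180.0, 190.0, 1100.0, 175.0, 1250.0, 185.0]
(fast_mean, n_fast), (slow_mean, n_slow) = split_populations(samples)
print(f"fast population: {n_fast} samples, mean {fast_mean:.1f} us")
print(f"slow population: {n_slow} samples, mean {slow_mean:.1f} us")
```

With the real 5000-repetition data this is what gives the two means quoted above (below ~200 µs and above 1 ms).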
Thanks very much.
Here is a histogram of 5000 repetitions of the CPU->GPU transfer time; it is the 3000-events case from the graph above.
Why do we get two peaks in the transfer time when we always use the same data for every repetition, and why does it happen only above a certain buffer size?