I've written an application that uses the Stream SDK for GPGPU acceleration, and I'm having GPU stability problems when I run it.
I've added a way to adjust the size of the data pieces (by "pieces," I mean arrays of numbers) that are sent to the GPU. There is a fixed overhead for each piece, so the larger the pieces, the more efficient the processing. However, I've found that the larger these pieces are, the more unstable the GPU becomes. While the application is running, the screen will sometimes freeze, go black for a second, then come back with a warning that the "display driver has stopped responding and has recovered."
I have no idea what's causing this, and I don't even know where to begin. Is this a defect or limitation in the SDK, or did I screw something up in my code? Are there any known solutions? I'd really love to get this working.
Thanks for any help you can offer.
EDIT: Occasionally the application also reports "failed to allocate memory." That makes me suspect a memory leak, but as far as I can tell I'm not dynamically allocating GPU memory anywhere.