I develop an OpenGL rendering and analysis tool for large-scale lidar data (VS2010, Windows, C++). I've recently discovered a problem where, if I load a large amount of data relative to GPU RAM (say 4-5 GB of VBO data on a machine with 2 GB of VRAM), everything renders fine, but I get a crash in the AMD driver as soon as I attempt any action that would open a secondary OpenGL window. Basically, once I pass some threshold amount of VBO data, the app crashes immediately on any call to wglCreateContext.

I've tested on a variety of machines, and the common factor is always AMD drivers; we never see the crash on Intel or Nvidia configurations. What's strange is that if I load, say, 500 million points worth of data, everything works fine on all platforms (the actual numbers on any given machine appear loosely correlated with GPU RAM). Add another 100 million points and rendering is still fine. I can zoom around through the data with no issues for as long as I like, so it's not having any intrinsic trouble managing the LOD structure, the VBOs, my shader programs, etc. It's handling the hard part beautifully. But the instant I attempt to create a secondary window to display a histogram or something, I get a crash.

A try/catch block around wglCreateContext doesn't accomplish anything. CodeXL itself crashes at the point my app crashes, so it's not providing much helpful feedback either. I'm kind of at a loss as to how to trace this one down.
Note that altering my LOD algorithm to produce a smaller number of larger VBOs doesn't appear to have any effect on the issue.
I'm pretty much at the point of tracing the entire OpenGL state at the moment wglCreateContext is called, to see if something funky is being set somewhere that the Nvidia/Intel drivers ignore. But if that were the case, I don't understand why it works fine with smaller amounts of data. And it's clearly the volume of data, not the content: completely different data sets run into trouble at approximately the same amount of VBO data.
Thanks for any ideas,