I have been hard-pressed to find time for coding these days, but amidst the frustrations of retooling my noise algorithm by hand, I've managed to begin developing different kernels for different noise types...
Right now I have a multifractal fractional Brownian motion algorithm, and in the coming days I will create other equations to mix and match with the fBm to make a more complicated landscape.
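For the curious, here's roughly the shape of the thing in plain Python — a CPU sketch, not my actual Brook+ kernel. The lattice hash, the constants, and the octave parameters are all stand-ins (my kernels use gradient noise; this uses a cheap interpolated value noise just to make the sketch self-contained):

```python
import math

def _hash(ix, iy, iz):
    # Cheap integer hash of a lattice point, mapped to [0, 1).
    # Stand-in for a real gradient lookup.
    h = (ix * 73856093) ^ (iy * 19349663) ^ (iz * 83492791)
    h &= 0xFFFFFFFF
    h = (((h >> 13) ^ h) * 0x5BD1E995) & 0xFFFFFFFF
    return ((h >> 16) ^ (h & 0xFFFF)) / 65536.0

def _lerp(a, b, t):
    return a + t * (b - a)

def _smooth(t):
    # Smoothstep fade so octaves blend without visible lattice creases.
    return t * t * (3.0 - 2.0 * t)

def value_noise3(x, y, z):
    # Trilinearly interpolated lattice noise in [-1, 1].
    ix, iy, iz = math.floor(x), math.floor(y), math.floor(z)
    u, v, w = _smooth(x - ix), _smooth(y - iy), _smooth(z - iz)
    n = [[[_hash(ix + i, iy + j, iz + k) for k in (0, 1)]
          for j in (0, 1)] for i in (0, 1)]
    x0 = _lerp(_lerp(n[0][0][0], n[1][0][0], u),
               _lerp(n[0][1][0], n[1][1][0], u), v)
    x1 = _lerp(_lerp(n[0][0][1], n[1][0][1], u),
               _lerp(n[0][1][1], n[1][1][1], u), v)
    return 2.0 * _lerp(x0, x1, w) - 1.0

def fbm3(x, y, z, octaves=5, lacunarity=2.0, gain=0.5):
    # Classic fBm: sum octaves of noise, each at double the frequency
    # and half the amplitude of the last.
    total, amp, freq = 0.0, 1.0, 1.0
    for _ in range(octaves):
        total += amp * value_noise3(x * freq, y * freq, z * freq)
        amp *= gain
        freq *= lacunarity
    return total

def multifractal3(x, y, z, octaves=5, lacunarity=2.0, gain=0.5, offset=0.7):
    # One common "multifractal" flavor: octaves multiply instead of add,
    # so already-rough regions get rougher. The offset keeps factors positive.
    total, amp, freq = 1.0, gain, 1.0
    for _ in range(octaves):
        total *= offset + amp * value_noise3(x * freq, y * freq, z * freq)
        amp *= gain
        freq *= lacunarity
    return total
```

Swapping this into a Brook+ kernel is mostly mechanical — the per-point loop body is embarrassingly parallel.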
The reasoning for retooling my noise algorithms wasn't just to fully understand their mechanisms: in order to have a truly 3-dimensional algorithm, I had to go back to the drawing board! I've now successfully managed to create kernels which take 3D coordinates as input (in my case a sphere... however any 3D object can be made "noisy" with this equation) and output "noisy" coordinates.
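The per-vertex idea is simple enough to sketch in a few lines: push each point in or out along its radial direction by the noise sampled at the point itself. The `noise_stub` here is just a placeholder for the real fBm kernel:

```python
import math

def noise_stub(x, y, z):
    # Placeholder for the real fBm kernel -- just something smooth,
    # deterministic, and roughly in [-1, 1].
    return math.sin(3.1 * x + 5.3 * y) * math.cos(2.7 * z)

def displace(p, amplitude=0.2, noise=noise_stub):
    # Per-vertex displacement: new radius = old radius + amplitude * noise.
    # This is what the kernel does for every point in parallel.
    x, y, z = p
    r = math.sqrt(x * x + y * y + z * z)
    s = (r + amplitude * noise(x, y, z)) / r
    return (x * s, y * s, z * s)
```

Because the displacement is purely a function of position, it works on any 3D mesh, not just a sphere — the sphere only makes the "planet" read nicely.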
Enough with my incessant jabber!
Please note the texture blandness is because I'm using a very limited color gradient setup and the material is mapped from the vertex colors. Lighting isn't quite working yet either, because I still need a painless way to calculate normals...
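One route I'm considering for the normals, sketched here under the assumption of a sphere parameterized by two angles: central-difference the displaced surface in each parameter direction and cross the two tangents. No analytic derivatives of the noise needed. The `height` function is a stand-in for the fBm:

```python
import math

def height(x, y, z):
    # Stand-in for the fBm evaluated at a point on the unit sphere.
    return 0.15 * math.sin(4.0 * x) * math.sin(4.0 * y + 2.0 * z)

def surface_point(theta, phi):
    # Unit sphere displaced along the radius by the height field.
    x = math.sin(theta) * math.cos(phi)
    y = math.sin(theta) * math.sin(phi)
    z = math.cos(theta)
    s = 1.0 + height(x, y, z)
    return (x * s, y * s, z * s)

def surface_normal(theta, phi, eps=1e-4):
    # Central-difference tangents in the two parameter directions,
    # then a cross product, then normalize.
    pt1, pt0 = surface_point(theta + eps, phi), surface_point(theta - eps, phi)
    pp1, pp0 = surface_point(theta, phi + eps), surface_point(theta, phi - eps)
    tu = [(a - b) / (2 * eps) for a, b in zip(pt1, pt0)]
    tv = [(a - b) / (2 * eps) for a, b in zip(pp1, pp0)]
    n = (tu[1] * tv[2] - tu[2] * tv[1],
         tu[2] * tv[0] - tu[0] * tv[2],
         tu[0] * tv[1] - tu[1] * tv[0])
    length = math.sqrt(n[0] ** 2 + n[1] ** 2 + n[2] ** 2)
    return tuple(c / length for c in n)
```

The six extra surface evaluations per vertex are the cost; on the GPU they'd just be six more noise calls in the same kernel.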
Although I have toyed with the idea of generating texture maps using Brook+, the idea is kind of pointless, as I could make better use of the pre-existing programmable pipelines to get the job done. So until I get to that point down the road, this coloring scheme is the best it's gonna get, and lighting will have minimal effect (I just managed to see the light).
On another note, I had to throw up this picture, as it seemed to have just the right balance to look like a planet at a distance (if planets had such simple coloring!). The last thing I was planning on doing before taking a nice long break was figuring out the alignment situation. It seems there are thresholds in my input values that cause serious grid alignment, mainly to do with the integer alignment of gradient noise... This really is more of a choice than a necessity, and at some point I may be required to switch to a more dynamic frequency-limiting system; however, when a decent balance is met between the input and the input floor values, there appears to be no grid alignment. Another fix, which may prove necessary for speed increases, will be to generate gradient values beforehand and index into them pseudorandomly.
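The pregenerated-gradient idea is the classic permutation-table trick (the 12 cube-edge gradients of improved Perlin noise, indexed through a shuffled table). A sketch of what that lookup would look like — the seed and table sizes here are arbitrary:

```python
import random

# Precomputed gradient directions: the 12 edge midpoints of a cube,
# as used in improved Perlin noise.
GRADS = [(1, 1, 0), (-1, 1, 0), (1, -1, 0), (-1, -1, 0),
         (1, 0, 1), (-1, 0, 1), (1, 0, -1), (-1, 0, -1),
         (0, 1, 1), (0, -1, 1), (0, 1, -1), (0, -1, -1)]

rng = random.Random(1234)   # fixed seed: same gradients every run
PERM = list(range(256))
rng.shuffle(PERM)
PERM += PERM                # doubled so nested lookups never index past the end

def gradient_at(ix, iy, iz):
    # Pseudorandom but repeatable gradient for a lattice point:
    # three nested table lookups instead of hashing on the fly.
    h = PERM[PERM[PERM[ix & 255] + (iy & 255)] + (iz & 255)]
    return GRADS[h % 12]
```

On the GPU side this trades per-point arithmetic for two small constant buffers, which is usually a win.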
Ah decisions decisions!
With the advent of OpenCL there will unfortunately be little reason to keep my project on the Stream SDK, so although I will likely maintain some sort of Brook+ demonstration, most optimization will happen during the OpenCL conversion. Certain aspects may be easier with a gradient buffer anyway.
Anyhow, without further ado... my final SS for this post:
Just implemented the timer from the Brook+ common source code, and it's reporting ~0.118-0.128 seconds to generate 1,182,722 values. So without ever implementing LOD I can practically already generate every value required to make a decently detailed object. Including read and write times, that's around ~0.4 seconds. However, with the advent of OpenCL I have ideas for mechanisms that will always keep the data ready as a VBO on the server, so load times will be nullified... As soon as the LOD algorithm requests that values be generated, poof, they appear in place of the inputs in the VBO, et voilà, the new values are in the VBO and an updated list of indices is used to render.
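A quick back-of-the-envelope on those numbers (taking the midpoint of the reported kernel times) shows why keeping the data resident as a VBO matters — the transfers eat the majority of the time:

```python
values = 1_182_722
kernel_time = 0.123   # midpoint of the reported 0.118-0.128 s
total_time = 0.4      # including read/write transfers

kernel_rate = values / kernel_time      # ~9.6 M values/s
effective_rate = values / total_time    # ~3.0 M values/s
print(f"kernel only:    {kernel_rate / 1e6:.1f} M values/s")
print(f"with transfers: {effective_rate / 1e6:.1f} M values/s")
```

Roughly two-thirds of the wall time is transfer overhead, which is exactly the part that server-side generation into the VBO would eliminate.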
Currently some others are doing all the same jazz with VBOs and vertex shaders and pixel buffers... I think I have solid evidence that these computation kernels will mesh with a good LOD algorithm far better, and far more simply, than anyone could ever hope for with the usual graphics-pipeline song and dance. Furthermore, I think it will be incredibly simple to do some work with mesh deformation, i.e., take the VBO and modify the points with a deformity mesh using a kernel. The deformity mesh, of course, would use a kernel to do the physics work. Mmmm, one day I'll be hurling asteroids at my little planet and cackling gleefully.
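To make the deformation idea concrete, here's a purely hypothetical sketch — the field, its parameters, and the straight-down dent direction are all made up for illustration. The kernel's job per vertex is just `new_pos = old_pos + field(old_pos)`:

```python
import math

def impact_field(p, center=(0.0, 0.0, 1.0), radius=0.5, depth=0.1):
    # Crater-like dent: vertices within `radius` of the impact point get
    # pushed in, falling off smoothly (cosine) to zero at the rim.
    d = math.dist(p, center)
    if d >= radius:
        return (0.0, 0.0, 0.0)
    falloff = 0.5 * (1.0 + math.cos(math.pi * d / radius))
    # Dent straight down -z for simplicity; a real field would push
    # along the local surface normal.
    return (0.0, 0.0, -depth * falloff)

def deform(vertices, field):
    # What the kernel would do in parallel over the whole VBO.
    return [tuple(c + o for c, o in zip(v, field(v))) for v in vertices]
```

The physics side would just be another kernel writing the `field` values, so the whole asteroid-hurling pipeline stays on the GPU.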