I'm using Brook+ to generate terrain with my HD3870. It has GDDR4 rather than GDDR3, and I'm quite sure it's helping.
First of all, the current state is very rudimentary; I restarted this project about 3 days ago (after an accidental drive format a while back... and, well, laziness and fighting with reinstallation on a new and improved RAID0).
So essentially all that happens in the code is that a heavily filtered/accumulated noise value is generated for every point in an x by y grid. The output, of course, is an array of height values. This output is generated only once, then compiled into a display list and rendered repeatedly... Surprisingly enough, the heightmap dimensions I can calculate in under a second are frighteningly larger than anything my GPU could ever hope to render!
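The accumulation scheme is basically fractal (octave-summed) noise. Since I haven't posted the Brook+ kernel yet, here's a rough CPU sketch in Python of the idea: a hash-based value-noise lattice, with each octave doubling the frequency and attenuating the amplitude. The `value_noise` hash and the `persistence` parameter are my stand-ins, not the actual kernel's code.

```python
import math

def value_noise(x, y, seed=0):
    """Deterministic lattice noise with smooth bilinear interpolation."""
    def lattice(ix, iy):
        # Integer hash giving a pseudo-random value in [0, 1] per lattice point.
        n = ix * 374761393 + iy * 668265263 + seed * 144665
        n = (n ^ (n >> 13)) * 1274126177
        return ((n ^ (n >> 16)) & 0xFFFFFFFF) / 0xFFFFFFFF

    ix, iy = int(math.floor(x)), int(math.floor(y))
    fx, fy = x - ix, y - iy
    # Smoothstep fade so the interpolation has continuous derivatives.
    sx, sy = fx * fx * (3 - 2 * fx), fy * fy * (3 - 2 * fy)
    top = lattice(ix, iy) + sx * (lattice(ix + 1, iy) - lattice(ix, iy))
    bot = lattice(ix, iy + 1) + sx * (lattice(ix + 1, iy + 1) - lattice(ix, iy + 1))
    return top + sy * (bot - top)

def height(x, y, octaves=16, persistence=0.5):
    """Sum 16 attenuated octaves: each level doubles frequency, halves amplitude."""
    total, amplitude, frequency, max_amp = 0.0, 1.0, 1.0, 0.0
    for _ in range(octaves):
        total += amplitude * value_noise(x * frequency, y * frequency)
        max_amp += amplitude
        amplitude *= persistence
        frequency *= 2.0
    return total / max_amp  # normalized back to [0, 1]

# Small demo grid; the real thing runs the same math per-point on the GPU.
heightmap = [[height(x / 16.0, y / 16.0) for x in range(64)] for y in range(64)]
```

On the GPU this is embarrassingly parallel, which is why even 16 octaves over millions of points finishes in fractional seconds.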
I generally use a 2048x2048 grid, resulting in 4,194,304 vertices and 4,190,209 quads!! This, of course, gives a very unacceptable ~0.25 fps (still no fps counter implemented). I've generated a 4096x4096 grid in about a second or less... those numbers are too scary to post... Just bear in mind these fractional times include a good 16 attenuated levels of noise.
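For anyone checking my arithmetic: an n x n heightmap gives n*n vertices but only (n-1)*(n-1) quads, since quads sit between the grid points.

```python
def grid_counts(n):
    """Vertex and quad counts for an n x n heightmap grid."""
    return n * n, (n - 1) * (n - 1)

print(grid_counts(2048))  # (4194304, 4190209)
```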
I'm planning to implement a caching/LOD algorithm to generate large queues of vertices for the GPU to process.
Anyhow, enough with the boring explanations; here are a couple of simple renders.