Methylene
Journeyman III

DRI/3D Acceleration and CAL

What are the limitations?

Will the final drivers support the full functionality of the RV670/770 for DRI/3D acceleration side-by-side with the CAL/Stream SDK stuff?

0 Likes
3 Replies
Methylene
Journeyman III

Also... either way, I think in the Linux environment there should be some easy way, in a multi-GPU setup, to designate a card as a non-graphics card.  That way one card could function as a full-fledged server for some high-performance procedural engine... I'm sure there are many other reasons it would be nice...

Now, as far as I know, you can have two X servers running... When the next SDK comes out I'd like to experiment with this idea.  I'm really kinda green to Linux still (sorry ATI, I didn't mean green like NVIDIA... I dropped those liars!)... Anyhow, I have been trying to find some information on setting up two X servers, each with separate display drivers.

I know in principle I need to be able to have two separate graphics drivers.  One would be the regular fglrx driver for my HD3200, and the other would be the dev version of the fglrx driver for use in conjunction with CAL/Brook+ and my HD3870.
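
For my own notes, here's roughly the xorg.conf layout I'm picturing if I try the single-server route first.  This is just a sketch: the BusIDs are placeholders I'd confirm with lspci, and I have no idea yet whether the regular and SDK builds of fglrx can actually coexist in one server like this.

    Section "Device"
        Identifier "IGP-HD3200"         # display card, regular fglrx
        Driver     "fglrx"
        BusID      "PCI:1:5:0"          # placeholder, check with lspci
    EndSection

    Section "Device"
        Identifier "Compute-HD3870"     # compute card, SDK build of fglrx
        Driver     "fglrx"
        BusID      "PCI:2:0:0"          # placeholder
    EndSection

    Section "Screen"
        Identifier "DisplayScreen"
        Device     "IGP-HD3200"
    EndSection

    Section "Screen"
        Identifier "ComputeScreen"      # headless, just so CAL can reach the card
        Device     "Compute-HD3870"
    EndSection

    Section "ServerLayout"
        Identifier "BothCards"
        Screen 0 "DisplayScreen"
        Screen 1 "ComputeScreen" RightOf "DisplayScreen"
    EndSection

If the two driver builds can't coexist, I guess that's exactly where the two-server idea comes in: one config file per server, each pointing at its own build.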

My real goal, for what I'm currently interested in testing, is a task-oriented thread system that manages a triangle bintree, calculating noise values for each new vertex as it gets added.  The server would then make that data available, as efficiently as possible, to be drawn by the HD3200... I'm sure the HD3200 will be a real bottleneck, but if I used the full potential of the HD3870 to generate the meshes and materials, the HD3200 would really be little more than a parser for the scene.  So a high triangle count may still be feasible if the work left to do on the triangles at that point is nominal.
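
To make that a little more concrete, here's the kind of host-side skeleton I'm imagining.  It's a pure CPU sketch for now, with a stand-in noise function where the Brook+/CAL dispatch would eventually go, and all the names are just mine:

    // Sketch of the "noise server" idea: a worker thread pulls newly split
    // vertices off a queue, computes a noise-based height for each, and
    // hands the results back for the display card to draw.
    #include <cmath>
    #include <condition_variable>
    #include <cstdio>
    #include <mutex>
    #include <queue>
    #include <thread>
    #include <vector>

    struct Vertex { float x, y, z; };

    // Stand-in for the real generator; eventually this work would be
    // shipped to the HD3870 through Brook+/CAL instead of run on the CPU.
    static float fakeNoise(float x, float y) {
        return std::sin(x * 12.9898f) * std::cos(y * 78.233f);
    }

    class NoiseServer {
    public:
        void submit(const Vertex& v) {
            { std::lock_guard<std::mutex> lk(m_); pending_.push(v); }
            cv_.notify_one();
        }
        void run() {
            worker_ = std::thread([this] {
                for (;;) {
                    Vertex v;
                    {
                        std::unique_lock<std::mutex> lk(m_);
                        cv_.wait(lk, [this] { return stop_ || !pending_.empty(); });
                        if (stop_ && pending_.empty()) return;
                        v = pending_.front(); pending_.pop();
                    }
                    v.z = fakeNoise(v.x, v.y);             // the expensive part
                    std::lock_guard<std::mutex> lk(m_);
                    ready_.push_back(v);                   // drawable by the HD3200
                }
            });
        }
        void stop() {
            { std::lock_guard<std::mutex> lk(m_); stop_ = true; }
            cv_.notify_all();
            worker_.join();
        }
        std::vector<Vertex> ready_;   // exposed directly, sketch only
    private:
        std::queue<Vertex> pending_;
        std::mutex m_;
        std::condition_variable cv_;
        std::thread worker_;
        bool stop_ = false;
    };

    int main() {
        NoiseServer server;
        server.run();
        // Pretend the bintree just split and produced a few new vertices.
        for (int i = 0; i < 8; ++i) server.submit({ i * 0.25f, i * 0.5f, 0.0f });
        server.stop();
        std::printf("computed %zu vertices\n", server.ready_.size());
        return 0;
    }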

I realize that most of these ideas could still be accomplished on a per-vertex basis using good old-fashioned GLSL, and I'm sure there's some crazy stuff I could do with geometry shaders, but it seems like there is a lot more extensibility in doing it all with Brook+.  I think using a graphics card to host a major portion of our future game engines is an idea that goes hand in hand with what graphics cards already do.

I mean, why are we doing our expensive computations on our CPUs?  In the tests I've done so far, the limitation was the ability to process things in parallel.  I ran into situations where even my Phenom 8650 was going, "jeez man, take it easy on me!"

Yet imagine if we could ask our custom-tailored noise generators for 4 values at the price of one.  And if all of this could go on 320+ times per cycle...  Really, one could develop a whole new API for designing games, rather than designing engines and renderers...
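
Just to pin down what I mean by "4 for the price of one": the closest CPU-side analogy I can write down is something like this toy (my own stand-in function, nothing like real Perlin noise), which evaluates a whole quad of samples per call the way a 4-wide vector unit would:

    // Toy illustration of "four noise values for the price of one":
    // evaluate one quad of samples per call, the way a 4-wide vector
    // unit would, instead of one scalar sample at a time.
    #include <cmath>
    #include <cstdio>

    struct Float4 { float v[4]; };

    // Stand-in noise (not real Perlin/simplex); the loop runs lane by lane
    // here, but on the GPU the four lanes would be evaluated together.
    static Float4 noise4(const Float4& x, const Float4& y) {
        Float4 out{};
        for (int lane = 0; lane < 4; ++lane)
            out.v[lane] = std::sin(x.v[lane] * 12.9898f + y.v[lane] * 78.233f);
        return out;
    }

    int main() {
        Float4 xs{{0.0f, 0.25f, 0.5f, 0.75f}};
        Float4 ys{{0.0f, 0.0f,  0.0f, 0.0f}};
        Float4 n = noise4(xs, ys);
        for (int lane = 0; lane < 4; ++lane)
            std::printf("sample %d -> %f\n", lane, n.v[lane]);
        return 0;
    }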

Oh the detail and expansibility of it all!

 

Keep up the good work AMD, just don't forget that right now you have the gaming industry by the balls with these little gizmos.  Keep pushing for technological superiority, and gosh darn it, give the Linux crowd what they want: more support, more gaming.  You have the whole of the open-source community ready to so lovingly say, "Hey MSFT, go take your buddy NVDA and crash somewhere else!"

I really think the power is in the people... We don't want to buy software, we want to buy hardware, and maybe some games if they rock enough!

0 Likes

Hi Methylene,

What you mentioned in your second paragraph is something I've always wanted to try but really haven't had the time due to my other responsibilities here. I'd love to see how that works out (a dedicated compute "X" server). That one could have the access control set appropriately for compute.
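
In case it helps, the rough shape of what I was picturing is below. This is untested on my end, and the config path, display number, and app name are just examples:

    # Second X server dedicated to the compute card, on display :1
    X :1 -config /etc/X11/xorg.conf.compute vt8 &

    # The "access control" part: only allow your own local user to talk
    # to the new display (xhost server-interpreted address syntax)
    DISPLAY=:1 xhost +si:localuser:$USER

    # Then point CAL/Brook+ apps at that display
    DISPLAY=:1 ./my_brook_app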

Can you let us (the forum) know how that goes? I'm really interested in seeing that experiment run. 🙂

Thanks!

Michael.
0 Likes

Hey Mike, and thanks again for the walkthrough post.  I wanted to clarify that, based on the documentation I found, I was under the impression I would have to install the driver included in the directory of the Stream SDK files.

That led me to believe that 3D support was not functional (the big TESTING ONLY and circle slash thingy over 3D when using the driver).

So then it begs the question... can CAL work while the card is also doing shading/rendering routines?  I imagine that, given the hardware's limits, per-vertex / per-fragment operations would eat even my 3870 alive if I were trying to compute things alongside them...

It also kind of blurs the line between shaders and GPGPU functions.  I mean, with vertex and even geometry shaders one can achieve some of the effects I'm looking for, but a truly efficient operation requires things the GPU itself does not implement.  Therefore a thin layer above the GPU could prove very useful, and it could also lead to an engine design that is very state-of-the-art in terms of multithreading.  (Oh, the thread-safety nightmares those will be!)

I've decided my interest is more in applying the CAL system to an optimized game-engine "server" like I mentioned.  So I will most likely enable my onboard graphics.  My understanding of X still leaves a little to be desired...

Can't I still assign a device in the xorg.conf without requiring a screen?

If so, what would there be to gain from having two X servers?  Maybe my understanding of GLX is a little lacking too... a context is assigned to a device, right?  So as long as there are two devices in the X server, I should be able to just use one of them for compute purposes, no?
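
The first thing I'll probably try is just opening the display and seeing whether both screens show up, something like the little check below (whether CAL will actually let me pick the second screen is the open question):

    // Quick check of the "two devices, one server" idea: open the local
    // display and count the screens (one per Device/Screen pair in xorg.conf).
    #include <X11/Xlib.h>
    #include <cstdio>

    int main() {
        Display* dpy = XOpenDisplay(":0");   // default local server
        if (!dpy) { std::fprintf(stderr, "no X server on :0\n"); return 1; }
        std::printf("found %d screen(s) on :0\n", ScreenCount(dpy));
        // With a layout like the one I sketched earlier, screen 0 would be the
        // HD3200 and screen 1 the HD3870; a GLX context is created against one
        // screen's visuals, so in principle ":0.1" is the "compute" half.
        XCloseDisplay(dpy);
        return 0;
    }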

At any rate that is my experiment for tonight.  I really hope I can get my feet wet with this stuff... I must admit I've been looking for a more feasible career than game development to start with.  I think going to school to get a better understanding of physics and mathematics, and even getting some real serious programming classes in would be nice.

But this new stream-platform stuff really opens some career paths... I mean, being a specialist in AMD stream processors and the sciences is kind of the career fusion I was looking for.  *Cough* 3D simulation.  Thanks AMD =).

I'd also like to note I got my 1/2 teraflop solution for under 200 dollars!  That's a real statement!

0 Likes