
Archives Discussions

npm1
Adept II

Blender Cycles(Opencl on AMD GPUS)

Dear OpenCL Developer,

Why doesn't the AMD OpenCL compiler work with Blender Cycles?

Whenever I compile the Blender Cycles kernel, the system either crashes due to lack of memory, or takes far too long and eventually fails with the following errors:

opencl build failed: errors in console

calclcompile failed, error: creating kernel_ocl_path_trace failed!

can't open file c:\tmp\5688.blend@ for writing: no file or directory

When is the AMD OpenCL compiler going to work properly with Blender Cycles?

Why can't the AMD OpenCL compiler developers test their compiler against Blender Cycles?

Season's Greetings,

npm1,

PS: I, as well as others (I assume), am considering switching from AMD GPUs to Nvidia.

371 Replies

http://www.subeimagenes.com/img/rory-says-1092186.html

Bigger capture. Sorry for the last one.

Oh son of a bitc... Well, yep. That answers that. Blender on Radeon hardware is, at the very least, a non-priority.

I don't know how to emotionally process this information. I'm angry, that's for certain. I now have multiple 7970s that are wonderful for the games I play once in a blue moon, but useless for the work I do every day. This likewise means that Luxrender's hard work with AMD GPUs is for naught.

I'm also upset because this seems like part of a broader trend in graphics cards. By that, I mean that as GPUs are becoming more important for work-related processing such as OpenCL acceleration, companies are finding that their consumer cards are significantly cannibalizing their professional cards. So instead of lowering their professional card prices or offering more for the same price, the companies are simply crippling their consumer cards. Nvidia has turned this form of artificial product segmentation into a freaking art form. I'm sad to see AMD apparently do the same.

I don't know what I'm going to do. I hate Nvidia, but I feel tricked by AMD.

0 Likes

amartincolby, what you have is anger and disillusionment; welcome to the group (the disappointed of AMD). In your case, and everyone's, we would have to go back to Nvidia again; as for AMD, in my case I would not buy another AMD card.

LuxCore 2.0 is pretty good, but of course it is very slow.

I don't know what AMD intends to do about this problem, nor do they tell you what to do.


Does that still come as a surprise? It has been like that since Evans & Sutherland. If you want a professional product, it doesn't come as cheap as a gaming card.

cactoos
Adept I

In my country it is not easy to find a FirePro card, and it is not cheaper either...


http://espanol.bestbuy.com/site/sapphire-firepro-r5000-graphic-card-2-gb-gddr5-sdram-pci-express-3-0...

VS

http://espanol.bestbuy.com/site/asus-geforce-gtx-780-ti-graphic-card-954-mhz-core-3-gb-gddr5-sdram-p...

What enthusiast is dumb enough to buy something more expensive that doesn't work?

Is this what you intend, AMD?

A few days ago I thought that if AMD gave its Catalyst OpenCL to the open-source RadeonSI Gallium3D stack, one could try to put together something a little faster and more sensible.


It is a shame that AMD can't fix this on their side. But the reality is that they are not going to.

I don't mean to start a flame war here, but why can't the Blender developers refactor the OpenCL code to make it work with current AMD drivers and cards? When people say that the code can't easily be broken up into pieces, it makes me think it is a big ball of spaghetti code that *should* be refactored.

And yes, there is a risk that it will still be broken after the refactor. But at least it will allow us to identify where the broken code is. And perhaps we can get 90% of the functionality working properly, and that will be good enough.

I wish a fraction of the energy devoted to this loooooong thread were devoted to working on the Blender code.

Aaron


Trust me, you haven't seen a flame war until you've seen Blenderartists threads on Gooseberry.

Regardless, as I understand it, it's not just Blender; it's everyone who is trying to get a render kernel operating on AMD cards. That applies to Indigo and Luxrender as well. Indigo's GPU renderer gets around the problems with a sort of hybrid system where the CPU is constantly feeding the GPU data. This works but is scarcely faster than a good CPU. Luxrender is trying the same thing with hybrid CPU+GPU rendering to maintain its full feature set, but as I can attest from my own experiments, AMD cards are so slow in this mode that, again, a plain CPU is usually better.
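
The "scarcely faster than a good CPU" outcome falls out of simple overhead arithmetic. Here is a toy Amdahl-style model of a hybrid renderer; the numbers are purely illustrative, not measurements from Indigo or Luxrender:

```python
def hybrid_speedup(gpu_fraction, gpu_speedup, feed_overhead):
    """Estimate a hybrid CPU+GPU renderer's speedup over CPU-only.

    gpu_fraction:  share of the work offloaded to the GPU
    gpu_speedup:   how much faster the GPU runs that share
    feed_overhead: extra time spent feeding the GPU data,
                   as a fraction of the CPU-only runtime
    """
    relative_time = (1 - gpu_fraction) + gpu_fraction / gpu_speedup + feed_overhead
    return 1 / relative_time

# Offload half the work to a GPU that runs it 10x faster, but pay a
# 30% data-feeding overhead: the net result barely beats a plain CPU.
print(round(hybrid_speedup(0.5, 10.0, 0.3), 2))  # 1.18
```

Under this model, a much faster GPU helps little while the CPU must keep feeding it; cutting the feeding overhead matters more than raw GPU speed.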

Also, in defense of the Cycles code: render engines and the way their closures operate are a special beast. Admittedly, I have only a cursory understanding of the whole thing, but it's hard to simply turn off features, which is why the rendering engines that support GPU processing all coincidentally lack the same features: motion blur, volume rendering, etc. These are the only features that can easily be eliminated!
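
In practice, "eliminating" a feature from a render kernel usually means compiling it out: the host passes preprocessor defines so whole closures never reach the GPU compiler at all. A minimal sketch of that mechanism; the flag names here are hypothetical, not Cycles' actual defines:

```python
# Hypothetical feature flags for a render megakernel; not Cycles' real ones.
ALL_FEATURES = ["VOLUMES", "MOTION_BLUR", "SSS", "HAIR"]

def build_options(disabled):
    """Build clBuildProgram-style -D options that compile features out,
    shrinking the kernel the OpenCL compiler has to handle."""
    return " ".join(f"-D KERNEL_NO_{f}" for f in ALL_FEATURES if f in disabled)

# A GPU build that drops the two hardest features:
print(build_options({"VOLUMES", "MOTION_BLUR"}))
# -D KERNEL_NO_VOLUMES -D KERNEL_NO_MOTION_BLUR
```

Guarding each closure with `#ifndef KERNEL_NO_VOLUMES` ... `#endif` in the kernel source then removes it entirely, which is why GPU backends tend to lack exactly the features that can be fenced off this cleanly.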

And finally, Ton has made a number of statements regarding AMD cards about how the kernel failures are not consistent. They have error handlers ready to go, and they cannot zero in on any specific problem. This implies that the issue is broadly architectural.


Apple seems to have no problem writing OpenCL kernels using AMD GPGPUs. Is there a lack of understanding how to adapt rendering engines to leverage OpenCL, and/or a lack of examples on how the big names do it?

More pointedly, PIXAR has a plethora of Mac Pros and Linux systems using OpenCL and AMD cards. Autodesk (Maya as one example), Pixelmator, and more leverage it, and optimized their OpenCL stacks for the Mac Pro and the D-series FirePro systems, not to mention the rest of the OpenCL-supported Mac lineup. Does the issue reside with the piss-poor current state of OpenCL on Linux, so everyone is complaining that all is lost? If that's the case, and with billions being thrown at Linux, shouldn't Linux make OpenCL a priority, whether on AMD, Nvidia, or Intel, never mind FPGAs, etc.?

Mac-supported OpenCL 1.2 systems are listed here: Mac computers: OpenCL and OpenGL support in OS X Mavericks

Nvidia doesn't support OpenCL 1.2 out of the box on their systems, but Apple does for OS X. These aren't limitations of the hardware, but of the resources devoted to implementing the stack on Linux.


Sorry, but where will we end up with this discussion? Answer: we will end up buying Nvidia, end of problem. In my opinion, what is most infuriating is that no clear answer is given on the Radeon matter. As for the Blenderartists thread: over 1,800 posts and not one with a solution; at the moment the thread is literally dead.

Things are very clear ... AMD Vs NVIDIA Choosing The Right GPU - YouTube ...

Ladies and gentlemen, there is nothing more to discuss; AMD is not going to take charge of the problem.

The most comical thing is that Nvidia has the Titan Z and AMD the Radeon 295X2 at half the price, but also half the performance.

"So the cheap option ends up being expensive."

At the time I thought that AMD, with Mantle and FirePro, could put out something like a "Mantle 3D" for this type of problem. The other thing is that no company will take an interest in these Radeon cards for renderers: Octane Render, Arion, etc.

A couple of years ago I said it: is AMD only for gaming? Yes.


There are a large number of OpenCL apps that work beautifully on AMD cards. Adobe's OpenCL acceleration works well, as does Vegas Pro, Davinci, and on and on.

As I said before, I am limited in my knowledge of the problem's specifics, so don't quote me as an expert.

The hunk of code that gets loaded onto the graphics card for a renderer is much larger than for the above applications. Those applications usually accelerate only one or two features in OpenCL, be it color correction or image scaling, and as such only need to load a small amount of code into the graphics card's memory. Similarly, the math that needs to be calculated for those applications is simpler.

I think a good deal of confusion is arising because companies are talking about using OpenCL for a variety of uses. When we talk about OpenCL vis-a-vis Blender, we are talking ONLY about Cycles rendering. Pixar's usage of OpenCL does not utilize the full renderer. And in cases where a large kernel needs to be loaded onto the GPU, Pixar uses almost nothing but Nvidia and CUDA. Actually, according to Pixar's presentations at Nvidia events and commentary on Blenderartists from some Pixar employees, almost all of their workstations run Nvidia.

And on the subject of Linux: if there is an issue with an AMD card on Linux, I blame the AMD drivers. The performance difference between Nvidia and AMD on Phoronix is huge. It's getting better, but it's still huge.


Thanks for dispelling some of my ignorance on renderers. I used to write render engines for the CPU back in the day, but I had forgotten about the special nature of their design.


Boxerab, I apologize. I absolutely did not mean to imply that you were ignorant of anything. I also did not mean to try to stand above you or talk down to you in any way. I simply had information and barfed up that information. I like to defend Blender and its staff because a great deal of shade gets thrown their way by the community, with some people saying their priorities are wonky or they're simply bad programmers. I think that they are doing a good job. I likewise think the people at Luxrender, considering there are really only a few of them, are doing a fantastic job and truly think that the blame lies almost entirely with AMD.


@amartincolby no offense taken! Thank you for providing information and helping me understand the problem more fully.

So, in a nutshell, the AMD cards cannot handle kernels over a certain size and complexity. And rendering requires large, complex kernels.

Actually, I have been working on some OpenCL kernels that target AMD cards, and I try to keep them as simple as possible, because I noticed compile times rise considerably if I have too many structs and nested function calls.

One interesting feature in OpenCL 2.0 is dynamic parallelism: one kernel calling another kernel without requiring the host. So, perhaps, this will alleviate the problem. However, I would guess that this will only work on Hawaii cards and newer.


Well, I'm glad I erred on the side of caution, anyhow. As you said, it's very easy to start flame wars online when things like context, facial expression, and tone aren't available. I want to make sure that I am never that guy.

You are describing the exact same issue as everyone else: ballooning compile times. The compile also eats up huge amounts of memory, with cards carrying less than 3GB being non-functional. Apparently, a few people have gotten the full Cycles kernel to compile on cards with 4GB of memory or more, but I don't know much about that. All I have is my own experience on a 3GB 7970 GHz Edition, where the compile would always fail.

OpenCL 2.0 is very exciting. The parallelism is part of unlocking a renderer's full feature set, which is uh-may-zing. I cannot wait for volume rendering on GPU. I recently did an animation that had fog everywhere: 640x360 resolution; four computers, a total of 24 cores; eleven days of rendering. Eleven. Days. I have hope even though I'm very squiffy on the details. Also, I'm not sure whether OpenCL will get volume rendering, considering that CUDA has had dynamic parallelism since CUDA 5.0 and Iray, Nvidia's flagship GPU renderer, still doesn't support volumes. Lord, I don't know. It's all a mess. All I know is that Nvidia is stable and AMD is not.

And really, that's all neither here nor there. I'm glad that I didn't insult you and I'm sorry that your compiling isn't going well.


@amartincolby your civility is very much appreciated!  And thanks for the good wishes.

Regarding volume rendering, I would be tempted to come out of retirement and write an OpenCL volume renderer once OpenCL 2.0 gets released for AMD. It would be very cool.



Volumetric rendering now enabled for Cycles on GPU
Since the newest CUDA performs quite well with volumetrics activated on GPU, Thomas Dinges has now enabled the option for #b3d Blender Cycles.



That means that from now on, we can render volumetrics on GPU! Please note though, that:
- Smoke/Fire rendering is not supported on GPU yet
- Decoupled Ray Marching is also not supported yet, so no Equi-Angular and MIS sampling yet



Professional cards are difficult to find in many markets. For example, Blender and Nvidia GeForce cards are very popular in India because Quadro/FirePro cards and Autodesk software like Maya are hard to get and support, and cost a FORTUNE. The AMD W5000 (a rather basic card) costs $340-$399 in the US; it costs the equivalent of $620 in India.


I have an Nvidia Quadro 4000, which works great for low-poly work, but Maya doesn't support SLI, so dropping in two more didn't affect my workflow. I put in multiple R9s because I was told Cycles could use multiple cards to increase productivity.

I believe that's the frustration: with each driver release productivity is dropping, and we all have a vested interest in the success of Radeon to help us achieve our goals.

Since the new Xbox and PlayStation now use AMD graphics chips, I assumed more capital was pushed toward making drivers more efficient. But does that mean it was taken away from developing the usefulness of these cards? If Mantle is being pushed to developers, is it going to be released as a standalone so that we may continue to be productive? At this point I believe not.

So Cycles is our best hope for professional output with the Radeon equipment we now own, and talking about migrating to Nvidia will not help our situation.

Is Bob back from vacation?

Your Roving Reporter

Raul Diaz


Incredible that we continue to hope; it has been 3 years of waiting.

From my point of view, Cycles is the most supported and most requested engine. If Bob is back from vacation, we would like him to comment on this, but we have been hoping for some improvement since June. In Catalyst, only the APUs seem to get attention, not dedicated GPUs.

idenoh
Journeyman III

I have been a silent party to this situation ever since it became an issue in 2011, along with the problems with OpenCL in other programs (like Sony Vegas). I never really felt the need to create an account and chime in, but I couldn't resist at this point. Is AMD seriously abandoning its loyal customers in favor of catering to the professional market? I get that the Mac Pro has the FirePro cards and they want to ensure the best Apple experience, but I find it insulting that someone like myself, who is trying to get into animation, would be required to buy a professional-grade card to get the performance Nvidia can pull off.

I've been a very long-time ATI/AMD customer ever since I got my first computer with a graphics card (an ATI Rage 128 Pro), and I have since *never* considered changing to Nvidia. Not only did AMD have better pricing, they had less hostile policies regarding their products. But after sitting on my... let's call it a hobby, since it's been held back for so long: if AMD is really going to focus solely on FirePro, I have no option at this point. I can get an Nvidia 970 for $299 this month and get fantastic performance with CUDA, or I can wait even longer for my 7970 to start working properly. I can't wait forever, guys. This is really upsetting to me, because I *want* to support you guys as a company; I rep you every chance I get, and this feels like a stab in the back. Depressing stuff, man.


In my case, I would vote to close the thread (Blender + AMD). I also saw a FirePro piece, http://www.fxguide.com/fxguidetv/fxguidetv-193-in-depth-with-arnold-creator-marcos-fajardo/ , where the models are in test mode. Also, looking at the prices of the new Nvidia cards, it does not surprise me that they beat anything from AMD. Too bad: AMD recently claimed that in 5 years we will have photorealistic games, but when it can't manage a decent render engine, that no longer makes sense.


Not only can you buy the GTX 970 for about $300, its Luxmark Sala score is just about even with the 290x, as you can see in these charts: http://wccftech.com/nvidia-maxwell-geforce-gtx-980-geforce-gtx-970-performance-numbers-leaked-gtx-98...

That means that Nvidia has finally started taking OpenCL seriously in their driver implementation. The Luxmark score for the GTX 980 is over 16% higher than the 290x. Unless AMD's 390x is a huge leap in OpenCL performance, one of their biggest advantages has been eliminated.

I'm really quite upset. I want to support AMD and wish them well, because we all know that if not for the success of the 290x and the huge success of the 295x2, Nvidia would be the same ol' arrogant behemoth that it so loves being, and they'd be charging $277,000 for the new 970 and 980. AMD needs to succeed.

That said, my work outweighs my desire for competition. This is a perhaps a short-term view to take, but being a graphic artist is a short-term business.


Work is *always* going to outclass brand loyalty. A few months ago I had very high hopes for this situation; we were promised a fix "soon". I'm guessing their fix didn't pan out, and Bob likely was told to stop tracking this or is no longer with the company. I agree with you 100%, though. AMD has always been my favorite for a number of reasons. They aren't the most powerful most of the time, and they don't always have the best features, but they're the most fair. They make sure that everyone has access to their technology, and that is what needs to happen in the gaming market. Unfortunately, they drag their feet like a 6-year-old told to clean their room when it comes to their drivers. And maybe this is something that *can't* be fixed by drivers, but at least tell us that, so I can expect to upgrade to the next generation of cards without worry.

But I can't wait. I just got accepted into a YouTube partner network for animation, and my channel is lacking just that, because while my FX-8350 CPU is fantastic, it's nowhere near as fast as GPU rendering, and I can't devote 90% of my time to simply rendering anymore. If AMD wants to salvage the situation, a hail mary would be perfect right about now: let us know what's going on and stop keeping us in the dark while pumping out more and more FirePro features. It's disheartening.


@amartincolby The Nvidia improvements on Luxmark are great news. Now we really have two competitive high-end discrete GPUs.

Previously, I was resigned to supporting only AMD cards, because GeForce performance was so poor. Also, do you think AMD would behave any differently than Nvidia if they were in Nvidia's place? I wonder. It seems that nasty companies like Nvidia and Intel survive, while hapless AMD has been teetering on the brink for the past 8 years.

I agree: I want AMD to succeed. But for my application, I am going to pick the best price/performance ratio.


Seriously, AMD is difficult to support despite their good gaming and LuxRender performance! We need tech that works properly! Now AMD has no advantage: Maxwell shines at both OpenCL and CUDA, with a good price and a very good performance/consumption ratio. 160 watts for Titan Black rendering performance!

AMD is dead!


What is more annoying is this lack of information...

I see many people defending AMD as a company, but honestly, today I do not see it as much better than Intel or Nvidia...


I don't think it makes sense to consider anything other than an Nvidia card if you need it for a 3D content creation package like Maya or Softimage.

cusa123
Adept I

As we are coming to realize, we are just talking among ourselves. We don't have any data, nor anyone from AMD to tell us something. AMD has already abandoned this thread; maybe Bob is on permanent vacation!

cusa123
Adept I

Bye AMD; Nvidia it is, I'm about to buy the 970.

I do not want to keep mourning for AMD and struggling against the current.

I don't know what else to say, and I want no more headaches. A real shame, but a $340 GPU at this level!! It offers you everything. AMD, I hope you can reconsider someday.

Bye bye.

cusa123
Adept I

In my case I have tried everything: 2.69, 2.71, 2.72. With a 7870 it doesn't work at all; what's more, it won't let me use Catalyst CCC. I uninstalled everything, rebooted, tried again, and nothing. I did use LuxRender and it works, but I see no improvements. With Cycles it doesn't compile, or it crashes. Another question: if AMD could get Catalyst with OpenCL 2.0 working in Cycles, I would probably hold off on buying a GTX 970 in favor of an R9 300, but I hope that happens before the end of the year.

cusa123
Adept I

CodeXL 1.5+ (Blender 2.72), AMD Catalyst 14.41, OpenCL 2.0

Functions with a high sample count usually indicate performance bottlenecks. Sort the table according to a specific metric to highlight potential bottleneck functions.


Stop trying to help this unfair company! They can't understand what we are telling them.

Now it's clear they are dead in both the CPU market and the GPU market.

Maxwell kills them in every area:

2x the power efficiency

15% faster in gaming

more powerful in GPU compute

CUDA works

GOOD at OpenCL

Nvidia's OpenCL 1.1 implementation is far, far behind AMD's latest OpenCL 2.0, and on top of that they are the designers of CUDA.


Remember, incompetent or not, AMD is our only bulwark against Nvidia domination. Are they being annoying? Yes, but we need them to succeed.

And also, I've been building computers for twenty years. I will always have a fondness for the first CPUs that I overclocked. And anyone who remembers the unearthly overclocking prowess of the old Thunderbird CPUs will likewise have a soft spot in their hearts for AMD.


@amartincolby Totally agree!! And I am eagerly awaiting AMD's response to the 980/970. The rumor mill says 20nm and 3D stacked memory.

Bring it!

3 years, no news! No good news! They seriously don't care about any of us. I am ready to build a personal GPU render farm, and only God knows how grateful I would be if it were AMD-based. I know that AMD is a great company, but I think they don't care about semi-professional or professional customers.

Today a simple GTX 780 beats the W9100 in V-Ray with 40% faster rendering. The GTX has 3GB and the FirePro 16GB; the GTX is 500 euros and the FirePro more than 3,000 euros. They don't give us any news to suggest we will have a solution ASAP!


Don't try to convince yourself that AMD will fix the problem.

You yourself said it: 3 years is already more than enough.

To make it clear to everyone: OpenCL 2.0 will not be the solution.

AMD CPUs are now also having trouble with the Blender 2.72 Cycles kernel.

_____________

Edit: The new Catalyst 14.9 / 14.9.1 does not work with some features. Progress has turned into regression.



cusa123 wrote:

Don't try to convince yourself that AMD will fix the problem.

You yourself said it: 3 years is already more than enough.

To make it clear to everyone: OpenCL 2.0 will not be the solution.

AMD CPUs are now also having trouble with the Blender 2.72 Cycles kernel.

_____________

Edit: The new Catalyst 14.9 / 14.9.1 does not work with some features. Progress has turned into regression.


AMD isn't hand-holding Blender to get their OpenCL stack current. Blender doesn't like OpenCL, never mind that they're out of sync with OpenCollada, OpenShading, and more. Just read the backhanded comments in their code commits. Sorry, but Blender not jumping on board and making OpenCL a first-class citizen is only to the detriment of their goal of being an industry-level solution.

LuxRender 2.0 looks like an obvious solution for Blender until Cycles is OpenCL-mature. LuxRender • View topic - LuxCore: materials/textures compilation in OpenCL code

AMD OpenCL C compiler

The AMD OpenCL C compiler is well known to be often unable even to compile complex kernels. The new dynamic code generation doesn't solve all problems but raises the bar a lot. For instance, this is a scene that has never worked on the HD5870:

Now it works, and the kernel compilation requires only 6 seconds.

I'm still unable to render this scene on the HD7970:

Code:
Error:E013:Insufficient Private Resources!




I may even hit some hardware limit here. Up to now, it is the only scene I'm unable to render with the new code.



New and old code path



The new dynamic code generation is now enabled by default. The old one is still available and can be re-enabled with the following properties:




Code:
opencl.kernel.dynamiccodegeneration.textures.enable = 0
opencl.kernel.dynamiccodegeneration.materials.enable = 0

jeanphi wrote: Have you tried to compare native CPU code with the new dynamic OpenCL code on CPU?

PATHOCL on a CPU device is still about 40% slower than native C++ (or about 25% if I fine-tune some parameters). However, I'm quite sure it is only a problem related to the grain of PATHOCL's parallelism: it is too fine-grained for CPUs.

As proof, BIASPATHOCL (where I can use the tile samples-per-pixel parameter to control the parallelism grain) is consistently faster than BIASPATHCPU. This is the monkey scene with native C++:

OpenCL on CPU is 11% faster than C++ in this scene  :D

It shouldn't be hard to add some control of parallelism grain to PATHOCL to achieve the same result.

=============================


This is progress, and obviously the LuxRender team is focused on getting their renderer ready for release as a drop-in for Blender, Maya, etc.

All I see from Blender is whining that their architecture works more in line with CUDA than OpenCL.

Never mind the fact that the likes of PIXAR, Disney Studios, Adobe, Sony, Apple, and many more are all in with OpenCL; it must clearly be solely AMD's issues that are holding Blender back from being an industry heavyweight.

Sending people out to complain about the lack of working OpenCL in Blender, on par with CUDA, must be where their priorities lie.


@mrdriftmeyer I agree; I'm tired of hearing all the moaning and groaning. Not all of the blame can be pinned on AMD.

If it is such a big problem for some people, just fork over $400 for a 970 and use the Cycles CUDA back end until AMD's drivers mature.
