Hello mates.
My main area of work is microcontroller programming. But now, besides programming the controllers, I need to create plugins for SCADA packages (such as Vijeo Citect and others) for the operators who will work at a specific production site.
In addition, there is the task of creating complex proprietary measuring instruments based on a micro-PC or SoC. I am a newbie in these things.
Please advise:
1) Do you have experience with PyOpenCL and PyOpenGL? I want to use them to create visual plugins and add-ons for a SCADA package.
2) Is it possible to work with hUMA from Python? What do I need? Can I expect a speed-up when processing many variables?
3) What is the news about AMD Mantle for Python (3.3)? (I am a fan of PC gaming, and I have a couple of ideas that I would like to implement in the future. For now it's just a hobby.)
4) In general, is it advisable to use Python for such purposes, or would something a bit more low-level (C, for example) be better?
Thx.
PS: I apologize for my English. I am from Ukraine and I try to write without using translators.
It seems no one can say anything at all. OK, bumping this up: **assuming direct control of this request.**
I think you should use C/C++. But if you have to use Python, it shouldn't be a big problem. hUMA is mostly a hardware feature, so you can use it through PyOpenCL from Python, and there are some AMD slides that mention Python as well. You can also mix Python, C, and C++: many high-performance Python modules (such as NumPy) use functions written in C/C++.
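To show what mixing Python with C can look like in the simplest case, here is a sketch using the standard-library ctypes module to call a function from the system C math library directly. This is only an illustration, not part of the original answer; the library lookup is platform-dependent and the `libm.so.6` fallback assumes a typical Linux system.

```python
import ctypes
import ctypes.util

# Locate the C math library; the path differs per platform,
# so fall back to a common Linux name if the lookup fails.
libm_path = ctypes.util.find_library("m") or "libm.so.6"
libm = ctypes.CDLL(libm_path)

# Declare the C signature so ctypes converts arguments correctly:
#   double sqrt(double)
libm.sqrt.restype = ctypes.c_double
libm.sqrt.argtypes = [ctypes.c_double]

print(libm.sqrt(9.0))  # → 3.0
```

Modules like NumPy do essentially this at a much larger scale (compiled C/C++ inner loops behind a Python API), which is why mixed Python/C code can stay fast.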
I would personally avoid Python unless it is a requirement. It may be quick to write Python programs at first, but as they grow the programs tend to become slower and slower to execute. A few times I have seen people who had to rewrite their programs from scratch in C/C++, also to avoid the complications of mixing different languages.
About hUMA, I am not sure how smart the implementation is, but I think buffers created with the CL_MEM_ALLOC_HOST_PTR flag can be accessed by the GPU directly (map/unmap operations become no-ops). You can do this with PyOpenCL as well.
I have to warn you that my knowledge of hUMA support may be flawed. I haven't seen any example code or clear documentation from AMD yet. I hope they will add some samples to the APP SDK. Here is also a thread you may find interesting:
and the SDK guide:
and maybe this:
Thanks! I will try Python together with C/C++ first, and then see which works better.