Hi @swarup,
thank you for your assistance.
Here is the requested information. The link below contains the measurements, stdout/stderr, and the batch script:
> https://gigamove.rz.rwth-aachen.de/d/id/izALLuqh67mye2
Hi @izhukov,
Thank you for providing the details. We will get back to you with our findings.
Hi @izhukov,
We analyzed the issue. The value returned by one of the ParaStation MPI APIs (used to read an environment variable) caused the problem. The issue is limited to the uProf command line; the uProf GUI application works fine. We have a fix for this, and it will be available in the next release of uProf. In the meantime, you can view the report using the GUI. After translation, you should see a .db file in the output directory. Launch the GUI, go to HOME > 'Import Session' > 'Import Profile Session' > 'Profile Data File' > browse and choose the .db file > press the 'Open Session' button. This should generate the report in the GUI.
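For reference, here is a minimal command-line sketch of the translation step. The session directory name below is only a placeholder for your actual collect output directory, and the exact options may differ between uProf versions, so please check 'AMDuProfCLI report --help':

    # Translation runs as part of report generation; even if the report step
    # fails, the translated database should be left in the session directory.
    AMDuProfCLI report -i ./uprof-session
    find ./uprof-session -name "*.db"   # the file to choose under 'Profile Data File' in the GUI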
Let us know if this works for you.
Hi @swarup,
thank you for providing a workaround; I'm looking forward to the new release.
I do not think it is related to ParaStationMPI, as it crashes with IntelMPI and OpenMPI too. I can provide error logs if you wish.
I have additional questions regarding GUI usage. I understand that they are outside the scope of this post, but they are still related to the same measurements and the same setup. Let me know if it is better to create a new post for these questions.
Here are the questions (GCC + ParaStationMPI test case):
1) I do not see MPI routines called from user code, although "--mpi" was enabled. Are they intercepted?
2) I do not see OpenMP data, as I do not compile with Clang. Do you plan to change this in the future and enable it for other compilers?
3) "-g" flag was provided to AMDuProfCLI to enable call graph, but it is empty (see picture). "adi_" should include many others functions.
Hi @izhukov ,
We could not observe the crash using OpenMPI. It would be helpful if you could share the error logs for IntelMPI and OpenMPI for analysis. Regarding your other queries:
1 & 3) Please use '--call-graph fpo:512' option with 'collect' command instead of '-g' to get a better callstack. User guide will have more info regarding '--call-graph' option.
2) GCC 10 does not support OpenMP 5.0 completely. As soon as a GCC release ships with the required support, we will enable it for GCC as well. Right now, OpenMP tracing is only supported on a single node; in the next release we will support multi-node setups.
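For example, here is a sketch of the suggested invocation. The launcher, rank count, config, output directory, and application name are placeholders; the meaning of the '--call-graph' parameters is described in the user guide:

    # As suggested above: replace '-g' with '--call-graph fpo:512' on the collect command.
    srun -n 4 AMDuProfCLI collect --config tbp --mpi --call-graph fpo:512 -o ./uprof-out ./adi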
Hi @swarup,
thanks for the prompt reply.
Please see the error logs here (in the filename suffix, the first letter 'i' stands for the Intel compiler and the second letter stands for the MPI implementation: i = IntelMPI, o = OpenMPI). I noticed that the crash happens with "assess", while it completes successfully for "tbp" and for OpenMPI.
The '--call-graph fpo:512' option helps to see user functions in the call graph/flame graph, but there are no MPI calls among them. Is there any way to sort columns in the call-graph table like in the metrics pane?