
Applications crash without error message if built/installed with anaconda present #483

Open
hborchert opened this issue Jun 12, 2023 · 4 comments

Comments

@hborchert
Contributor

Installing madness on a machine with anaconda installed leads to CMake finding MPI_C/MPI_CXX in the anaconda directory, and the resulting applications crash on startup. Anaconda needs to be deactivated (conda deactivate) before installing or running madness applications.
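
For reference, the workaround currently looks roughly like this (the madness source path and build directory are placeholders, not my actual setup):

conda deactivate                      # make sure anaconda's MPI is not picked up
cmake -S /path/to/madness -B build    # CMake should now find the intended MPI_C/MPI_CXX
cmake --build build
moldft                                # applications also have to be run with anaconda deactivated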

@kottmanj
Contributor

As far as I recall, it works when the development packages are installed (at least on Linux):

conda install mkl-devel

What is the error message when the application crashes? Something like: "Intel MKL FATAL ERROR: Cannot load libmkl_avx512.so.1 or libmkl_def.so.1."?

@robertjharrison
Contributor

robertjharrison commented Jun 13, 2023 via email

@hborchert
Contributor Author

No error message (on macOS) at all; the application just crashes before reading the input, e.g. with "zsh: abort moldft".

@kottmanj
Contributor

I also confused MPI with MKL ... sorry for that.
However, the issues might be related. I think I don't run into the MPI problems in conda environments because I usually deactivate them.
With MKL the following fixes work; they might work for MPI as well:

Export the MKLROOT variable before the CMake configuration (it needs to be exported at runtime as well):

export MKLROOT=/opt/intel/mkl

The path needs to be adapted depending on where MKL is installed. On clusters it often works to reload the MKL module after loading the anaconda module; that usually resets the MKLROOT variable.
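
A minimal sketch of that order of operations (the MKL prefix is the one from the line above; the madness paths are placeholders):

export MKLROOT=/opt/intel/mkl    # before configuring ...
cmake -S /path/to/madness -B build
cmake --build build
export MKLROOT=/opt/intel/mkl    # ... and again in the shell that later runs the applications
moldft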

There is also MPI_ROOT (with an underscore, as far as I know) that might do the same trick in this case, although I assume MPI is more tricky. On clusters I would try to reload the MPI module and hope for the best.
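
On a cluster with environment modules that could look something like this (the module names and the MPI prefix are only examples, not a specific system):

module load anaconda3               # or whatever provides conda
module load openmpi                 # reload MPI afterwards so its paths take precedence again
export MPI_ROOT=/path/to/openmpi    # hypothetical prefix, analogous to MKLROOT above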

Another way is to set the paths to mpicxx and mpicc manually:

cmake -D MPI_C_COMPILER=/path/to/bin/mpicc -D MPI_CXX_COMPILER=/path/to/bin/mpicxx .... 

And when running, explicitly call:

/path/to/bin/mpirun -n 1 moldft
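
Putting the two together, a full configure/run cycle would look roughly like this (/path/to stands for wherever the non-anaconda MPI is installed):

cmake -S /path/to/madness -B build \
      -D MPI_C_COMPILER=/path/to/bin/mpicc \
      -D MPI_CXX_COMPILER=/path/to/bin/mpicxx
cmake --build build
/path/to/bin/mpirun -n 1 moldft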
