Issue compiling pydusa for Appion manual install on RHEL 6.7

Added by Mark Watts about 8 years ago

Hi,

I'm following the Appion manual installation instructions on a RHEL 6.7 x86_64 Virtual Machine.
I've reached the "Install MyMPI" step here: http://emg.nysbc.org/projects/appion/wiki/Install_EMAN2

My first issue is that "module add openmpi-x86_64" doesn't work, I have to do: "module add openmpi-1.10-x86_64".

Second, there is no "pydusa-1.15es.tgz" to download, only "pydusa-1.15es-fftmpi-6__2016_09_07.tgz".

When I try to compile this, I see the following error:

[root@srv01746 pydusa-1.15es-fftmpi-6]# export MPIROOT=/usr/lib64/openmpi-1.10
[root@srv01746 pydusa-1.15es-fftmpi-6]# export MPIINC=/usr/include/openmpi-1.10-x86_64
[root@srv01746 pydusa-1.15es-fftmpi-6]# export MPILIB=${MPIROOT}/lib
[root@srv01746 pydusa-1.15es-fftmpi-6]# export MPIBIN=${MPIROOT}/bin
[root@srv01746 pydusa-1.15es-fftmpi-6]# module add openmpi-1.10-x86_64
[root@srv01746 pydusa-1.15es-fftmpi-6]# ./configure --with-mpicc=/usr/lib64/openmpi-1.10/bin/mpicc
checking build system type... x86_64-unknown-linux-gnu
checking host system type... x86_64-unknown-linux-gnu
checking target system type... x86_64-unknown-linux-gnu
checking for a BSD-compatible install... /usr/bin/install -c
checking whether build environment is sane... yes
checking for gawk... gawk
checking whether make sets $(MAKE)... yes
checking for a BSD-compatible install... /usr/bin/install -c
checking for gcc... gcc
checking for C compiler default output file name... a.out
checking whether the C compiler works... yes
checking whether we are cross compiling... no
checking for suffix of executables...
checking for suffix of object files... o
checking whether we are using the GNU C compiler... yes
checking whether gcc accepts -g... yes
checking for gcc option to accept ISO C89... none needed
checking for style of include used by make... GNU
checking dependency style of gcc... none
checking for g77... g77
checking whether we are using the GNU Fortran 77 compiler... yes
checking whether g77 accepts -g... yes
checking for main in -lm... yes
checking for ranlib... ranlib
checking how to run the C preprocessor... gcc -E
checking for grep that handles long lines and -e... /bin/grep
checking for egrep... /bin/grep -E
checking for ANSI C header files... yes
checking for sys/types.h... yes
checking for sys/stat.h... yes
checking for stdlib.h... yes
checking for string.h... yes
checking for memory.h... yes
checking for strings.h... yes
checking for inttypes.h... yes
checking for stdint.h... yes
checking for unistd.h... yes
checking for long int... yes
checking size of long int... 8
checking for python... /usr/bin/python
checking for python include directory... -I/usr/include/python2.6
checking for --enable-fortran... no
configure: creating ./config.status
config.status: creating Makefile
config.status: creating src/Makefile
config.status: creating lib/Makefile
config.status: creating mpi_tests/Makefile
config.status: creating workit/Makefile
config.status: creating check/Makefile
config.status: executing depfiles commands

========================
Configuration Parameters
========================

Python                   /usr/bin/python
MPI C Compiler:          /usr/lib64/openmpi-1.10/bin/mpicc
MPIRUN:                  /usr/lib64/openmpi-1.10/bin/mpirun
CFLAGS:                  -g -O2 -DLONG64=long -fPIC -I/usr/include/python2.6 -I/usr/lib64/python2.6/site-packages/numpy/core/include
LDFLAGS:                 -shared -L/usr/lib64/openmpi-1.10/lib -lmpi

[root@srv01746 pydusa-1.15es-fftmpi-6]# make
Making all in src
make[1]: Entering directory `/opt/pydusa-1.15es-fftmpi-6/src'
Making all in .
make[2]: Entering directory `/opt/pydusa-1.15es-fftmpi-6/src'
make[2]: Nothing to be done for `all-am'.
make[2]: Leaving directory `/opt/pydusa-1.15es-fftmpi-6/src'
/usr/lib64/openmpi-1.10/bin/mpicc -g -O2 -DLONG64=long -fPIC -I/usr/include/python2.6 -I/usr/lib64/python2.6/site-packages/numpy/core/include -c mympimodule.c -o mympimodule.o
mympimodule.c:99:23: error: fftw3-mpi.h: No such file or directory
mympimodule.c: In function ‘mpi_attr_get’:
mympimodule.c:527: warning: ‘MPI_Attr_get’ is deprecated (declared at /usr/include/openmpi-1.10-x86_64/mpi.h:1227)
mympimodule.c: In function ‘mpi_comm_spawn’:
mympimodule.c:670: warning: cast to pointer from integer of different size
mympimodule.c:672: warning: cast to pointer from integer of different size
mympimodule.c: In function ‘mpi_open_port’:
mympimodule.c:717: warning: cast to pointer from integer of different size
mympimodule.c: In function ‘mpi_comm_accept’:
mympimodule.c:753: warning: cast to pointer from integer of different size
mympimodule.c: In function ‘mpi_comm_connect’:
mympimodule.c:771: warning: cast to pointer from integer of different size
mympimodule.c: In function ‘mpi_win_allocate_shared’:
mympimodule.c:1638: warning: assignment from incompatible pointer type
mympimodule.c:1639: warning: passing argument 1 of ‘PyTuple_SetItem’ from incompatible pointer type
/usr/include/python2.6/tupleobject.h:43: note: expected ‘struct PyObject *’ but argument is of type ‘struct PyArrayObject *’
mympimodule.c:1640: warning: passing argument 1 of ‘PyTuple_SetItem’ from incompatible pointer type
/usr/include/python2.6/tupleobject.h:43: note: expected ‘struct PyObject *’ but argument is of type ‘struct PyArrayObject *’
mympimodule.c:1641: warning: return from incompatible pointer type
mympimodule.c: In function ‘mpi_win_shared_query’:
mympimodule.c:1672: warning: assignment from incompatible pointer type
mympimodule.c:1673: warning: passing argument 1 of ‘PyTuple_SetItem’ from incompatible pointer type
/usr/include/python2.6/tupleobject.h:43: note: expected ‘struct PyObject *’ but argument is of type ‘struct PyArrayObject *’
mympimodule.c:1674: warning: return from incompatible pointer type
mympimodule.c: In function ‘mpi_iterefa’:
mympimodule.c:2017: error: ‘fftw_plan’ undeclared (first use in this function)
mympimodule.c:2017: error: (Each undeclared identifier is reported only once
mympimodule.c:2017: error: for each function it appears in.)
mympimodule.c:2017: error: expected ‘;’ before ‘plan’
mympimodule.c:2046: warning: initialization makes pointer from integer without a cast
mympimodule.c:2051: error: expected ‘;’ before ‘plan_real_to_complex’
mympimodule.c:2052: error: expected ‘;’ before ‘plan_complex_to_real’
mympimodule.c:2100: warning: assignment makes pointer from integer without a cast
mympimodule.c:2145: error: ‘plan_complex_to_real’ undeclared (first use in this function)
mympimodule.c:2162: error: ‘plan_real_to_complex’ undeclared (first use in this function)
mympimodule.c:2301: warning: assignment from incompatible pointer type
mympimodule.c:2302: warning: passing argument 1 of ‘PyTuple_SetItem’ from incompatible pointer type
/usr/include/python2.6/tupleobject.h:43: note: expected ‘struct PyObject *’ but argument is of type ‘struct PyArrayObject *’
mympimodule.c:2303: warning: return from incompatible pointer type
mympimodule.c: In function ‘initmpi’:
mympimodule.c:2535: warning: ‘ompi_mpi_ub’ is deprecated (declared at /usr/include/openmpi-1.10-x86_64/mpi.h:914)
mympimodule.c:2537: warning: ‘ompi_mpi_lb’ is deprecated (declared at /usr/include/openmpi-1.10-x86_64/mpi.h:913)
make[1]: *** [mympimodule.o] Error 1
make[1]: Leaving directory `/opt/pydusa-1.15es-fftmpi-6/src'
make: *** [all-recursive] Error 1

I'm guessing the issue is this line:

mympimodule.c:99:23: error: fftw3-mpi.h: No such file or directory

src/mympimodule.c contains:

#include "fftw3-mpi.h" 

This file is actually here: ./fftw_mpi/fftw-3.3.5/mpi/fftw3-mpi.h
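
For what it's worth, I imagine one could point make at the bundled headers, something like this (just a sketch: the api/ path is my guess from the fftw source layout, and the bundled fftw would presumably need building first so the matching libfftw3_mpi exists for the link step):

# Re-run make with the configured CFLAGS plus the in-tree fftw include dirs
# ($(pwd) keeps the paths absolute when make recurses into src/)
make CFLAGS="-g -O2 -DLONG64=long -fPIC \
  -I/usr/include/python2.6 \
  -I/usr/lib64/python2.6/site-packages/numpy/core/include \
  -I$(pwd)/fftw_mpi/fftw-3.3.5/mpi \
  -I$(pwd)/fftw_mpi/fftw-3.3.5/api"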

Can anyone suggest what might be the issue here?

Thanks,

Mark.


Replies (6)

RE: Issue compiling pydusa for Appion manual install on RHEL 6.7 - Added by Sargis Dallakyan about 8 years ago

Hi Mark,

I installed pydusa yesterday and was facing a similar problem. fftw3-mpi.h seems to be new in fftw-3.3.5 and isn't available in the CentOS 6 fftw-devel package. I followed the install instructions at http://blake.bcm.edu/emanwiki/EMAN2/Parallel/PyDusa and it worked. ./install_mpi.py compiles the fftw shipped with pydusa and uses it, so there is no version-mismatch problem.

Hope this helps,
Sargis

RE: Issue compiling pydusa for Appion manual install on RHEL 6.7 - Added by Mark Watts about 8 years ago

http://blake.bcm.edu/emanwiki/EMAN2/Parallel/PyDusa suggests you run the following:

tar -xvzf pydusa-1.15es.tgz
cd pydusa-1.15es
./install_mpi.py

The problem is, that's not the name of the tar file offered for download at http://ncmi.bcm.edu/ncmi/software/counter_222/software_121
That page has: pydusa-1.15es-fftmpi-6__2016_09_07.tgz

If I use this one, I see:

[root@srv01746 pydusa-1.15es-fftmpi-6]# ./install_mpi.py
  File "./install_mpi.py", line 124
    with open("Makefile", "r") as fp, open("Makefile___out", "w") as fp_out:
                                    ^
SyntaxError: invalid syntax
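
As far as I can tell, that's the error Python 2.6 gives for a multi-item "with" statement ("with open(...) as fp, open(...) as fp_out:"), which was only added in Python 2.7, so the script is presumably picking up RHEL 6's system interpreter:

# Check which interpreter the script runs under
head -1 install_mpi.py        # the shebang line
/usr/bin/python --version     # RHEL 6.7 ships Python 2.6, which lacks multi-item "with"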

Which tar file did you use, and where did you get it from?

Thanks,

Mark.

RE: Issue compiling pydusa for Appion manual install on RHEL 6.7 - Added by Sargis Dallakyan about 8 years ago

I used pydusa-1.15es-fftmpi-6 too, linked from the EMAN2 download page. I ran ../Python/bin/python install_mpi.py to use the Python version shipped with EMAN2.

RE: Issue compiling pydusa for Appion manual install on RHEL 6.7 - Added by Mark Watts about 8 years ago

I take it that, following http://emg.nysbc.org/projects/appion/wiki/Install_EMAN2, you used the binary version then?

If I follow those steps for the source version, I don't get a Python binary.
However, if I use the specific binary version the install docs mention, I do get a Python binary and it does seem to compile/install OK.

Thanks,

Mark.

RE: Issue compiling pydusa for Appion manual install on RHEL 6.7 - Added by Mark Watts about 8 years ago

So, I've unpacked the binary version of EMAN2.
I've run the following:

cd pydusa-1.15es-fftmpi-6
module add openmpi-1.10-x86_64
../EMAN2/Python/bin/python install_mpi.py

This seems to have worked, although I'm not entirely sure how to ensure that this version of Python is used permanently for this.
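
(One way I can think of to pin it, assuming EMAN2 is unpacked under /opt/EMAN2 as in the commands below, would be to put its Python first on PATH, e.g. in ~/.bashrc:)

# Prefer EMAN2's bundled Python over /usr/bin/python
export PATH=/opt/EMAN2/Python/bin:$PATH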

I've then followed the steps from "copy the mpi.so to site-packages with a different name" down to the tests.

Test 1 appears to work.
Test 2 fails:

[root@srv01746 pydusa-1.15es-fftmpi-6]# /opt/EMAN2/Python/bin/python -c 'import sys; from mpi import mpi_init; mpi_init(len(sys.argv), sys.argv)'
[srv01746:102037] mca: base: component_find: unable to open /usr/lib64/openmpi-1.10/lib/openmpi/mca_shmem_sysv: /usr/lib64/openmpi-1.10/lib/openmpi/mca_shmem_sysv.so: undefined symbol: opal_show_help (ignored)
[srv01746:102037] mca: base: component_find: unable to open /usr/lib64/openmpi-1.10/lib/openmpi/mca_shmem_posix: /usr/lib64/openmpi-1.10/lib/openmpi/mca_shmem_posix.so: undefined symbol: opal_show_help (ignored)
[srv01746:102037] mca: base: component_find: unable to open /usr/lib64/openmpi-1.10/lib/openmpi/mca_shmem_mmap: /usr/lib64/openmpi-1.10/lib/openmpi/mca_shmem_mmap.so: undefined symbol: opal_show_help (ignored)
--------------------------------------------------------------------------
It looks like opal_init failed for some reason; your parallel process is
likely to abort.  There are many reasons that a parallel process can
fail during opal_init; some of which are due to configuration or
environment problems.  This failure appears to be an internal failure;
here's some additional information (which may only be relevant to an
Open MPI developer):

  opal_shmem_base_select failed
  --> Returned value -1 instead of OPAL_SUCCESS
--------------------------------------------------------------------------
--------------------------------------------------------------------------
It looks like orte_init failed for some reason; your parallel process is
likely to abort.  There are many reasons that a parallel process can
fail during orte_init; some of which are due to configuration or
environment problems.  This failure appears to be an internal failure;
here's some additional information (which may only be relevant to an
Open MPI developer):

  opal_init failed
  --> Returned value Error (-1) instead of ORTE_SUCCESS
--------------------------------------------------------------------------
--------------------------------------------------------------------------
It looks like MPI_INIT failed for some reason; your parallel process is
likely to abort.  There are many reasons that a parallel process can
fail during MPI_INIT; some of which are due to configuration or environment
problems.  This failure appears to be an internal failure; here's some
additional information (which may only be relevant to an Open MPI
developer):

  ompi_mpi_init: ompi_rte_init failed
  --> Returned "Error" (-1) instead of "Success" (0)
--------------------------------------------------------------------------
*** An error occurred in MPI_Init
*** on a NULL communicator
*** MPI_ERRORS_ARE_FATAL (processes in this communicator will now abort,
***    and potentially your MPI job)
[srv01746:102037] Local abort before MPI_INIT completed successfully; not able to aggregate error messages, and not able to guarantee that all other processes were killed!

I have no idea what this means, nor how to fix it.
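
The closest thing to a lead I've found is that "undefined symbol: opal_show_help" when the MCA plugins load usually means the Open MPI core symbols aren't visible to them, either because Python dlopens mpi.so without RTLD_GLOBAL, or because two different Open MPI builds are being mixed (EMAN2 may bundle its own). A rough check, with a guessed site-packages path:

# Hypothetical path; see which MPI libraries the module actually resolves to
ldd /opt/EMAN2/Python/lib/python2.7/site-packages/mpi.so | grep -i mpi
echo $LD_LIBRARY_PATH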

I'm guessing that the http://emg.nysbc.org/projects/appion/wiki/Install_EMAN2 documentation page needs a thorough update, although I do note the comment at the bottom:

"This module was very difficult to get working, it seems to be a poorly supported python wrapper for MPI. So, what we are going to do is compile the module, rename it, and create a wrapper. So, essentially we are creating a wrapper around the wrapper. We can only hope they switch to [http://mpi4py.scipy.org/ mpi4py] in the future."

Mark.

RE: Issue compiling pydusa for Appion manual install on RHEL 6.7 - Added by Sargis Dallakyan about 8 years ago

Mark Watts wrote:

I take it that following http://emg.nysbc.org/projects/appion/wiki/Install_EMAN2 you used the binary version then?

That's right, I used the latest binary version, which comes with Python 2.7. EMAN2/Python/bin/python install_mpi.py worked for me and printed the following at the end:

=====> To complete installation you need to run:
source /gpfs/sw/eman/2.12/eman2.bashrc

Then start SPARX or EMAN2 and run 'import mpi'. If it runs without any error message the installation is complete.

I sourced 2.12/eman2.bashrc and 'import mpi' worked for me.
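
(i.e. something like this, with the bashrc path adjusted to wherever your copy lives:)

source /gpfs/sw/eman/2.12/eman2.bashrc
python -c 'import mpi'    # no traceback means the install is complete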
