
mpi4py will not compile on Ubuntu

I have Ubuntu Desktop LTS 16.04.1 and have done the following. I went to https://www.open-mpi.org/software/ompi/v2.0/, downloaded openmpi-2.0.1.tar.gz, and installed it using the following commands:

tar -xvf openmpi-2.0.1.tar.gz 
cd openmpi-2.0.1 
./configure --prefix="/home/$USER/.openmpi" 
make 
sudo make install 
export PATH="$PATH:/home/$USER/.openmpi/bin" 
export LD_LIBRARY_PATH="$LD_LIBRARY_PATH:/home/$USER/.openmpi/lib/" 
echo export PATH="$PATH:/home/$USER/.openmpi/bin" >> /home/$USER/.bashrc 
echo export LD_LIBRARY_PATH="$LD_LIBRARY_PATH:/home/$USER/.openmpi/lib/" >> /home/$USER/.bashrc 
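As a minimal sanity check, assuming the exports above are in effect in the current shell, the paths can be printed from Python to confirm that the build under /home/$USER/.openmpi is actually picked up:

import os
# Print the environment variables set above; both should contain the
# /home/<user>/.openmpi paths if the exports took effect.
for var in ("PATH", "LD_LIBRARY_PATH"):
    print("%s=%s" % (var, os.environ.get(var, "<unset>")))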

After that, I installed mpi4py using:

sudo apt-get update 
sudo apt-get install python-mpi4py 
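Note that the python-mpi4py package is built against Ubuntu's own Open MPI, not the copy compiled above, so the two installations can end up mixed. A minimal check of which MPI build mpi4py was configured with (importing the top-level mpi4py package does not call MPI_Init, so this works even when from mpi4py import MPI aborts):

import mpi4py
# get_config() reports the MPI configuration recorded when mpi4py was built,
# for example which mpicc was used; it does not initialize MPI itself.
print(mpi4py.get_config())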

Then I created a Python file named something.py (my Python version is Python 2.7.12) with the following lines:

from mpi4py import MPI 
comm = MPI.COMM_WORLD 
rank = comm.Get_rank() 
print "hello world from process ", rank 

When I try to run it using mpiexec -n 5 python something.py, I get the following output:

------------------------------------------------------- 
Primary job terminated normally, but 1 process returned 
a non-zero exit code.. Per user-direction, the job has been aborted. 
------------------------------------------------------- 
-------------------------------------------------------------------------- 
It looks like orte_init failed for some reason; your parallel process is 
likely to abort. There are many reasons that a parallel process can 
fail during orte_init; some of which are due to configuration or 
environment problems. This failure appears to be an internal failure; 
here's some additional information (which may only be relevant to an 
Open MPI developer): 

    opal_init failed 
    --> Returned value Error (-1) instead of ORTE_SUCCESS 
-------------------------------------------------------------------------- 
*** An error occurred in MPI_Init_thread 
*** on a NULL communicator 
*** MPI_ERRORS_ARE_FATAL (processes in this communicator will now abort, 
*** and potentially your MPI job) 
[vlad-VirtualBox:3551] Local abort before MPI_INIT completed successfully; not able to aggregate error messages, and not able to guarantee that all other processes were killed! 
[vlad-VirtualBox:03551] mca: base: component_find: unable to open /usr/lib/openmpi/lib/openmpi/mca_shmem_mmap: /usr/lib/openmpi/lib/openmpi/mca_shmem_mmap.so: undefined symbol: opal_show_help (ignored) 
[vlad-VirtualBox:03551] mca: base: component_find: unable to open /usr/lib/openmpi/lib/openmpi/mca_shmem_sysv: /usr/lib/openmpi/lib/openmpi/mca_shmem_sysv.so: undefined symbol: opal_show_help (ignored) 
[vlad-VirtualBox:03551] mca: base: component_find: unable to open /usr/lib/openmpi/lib/openmpi/mca_shmem_posix: /usr/lib/openmpi/lib/openmpi/mca_shmem_posix.so: undefined symbol: opal_shmem_base_framework (ignored) 
-------------------------------------------------------------------------- 
It looks like opal_init failed for some reason; your parallel process is 
likely to abort. There are many reasons that a parallel process can 
fail during opal_init; some of which are due to configuration or 
environment problems. This failure appears to be an internal failure; 
here's some additional information (which may only be relevant to an 
Open MPI developer): 

    opal_shmem_base_select failed 
    --> Returned value -1 instead of OPAL_SUCCESS 
-------------------------------------------------------------------------- 
-------------------------------------------------------------------------- 
It looks like MPI_INIT failed for some reason; your parallel process is 
likely to abort. There are many reasons that a parallel process can 
fail during MPI_INIT; some of which are due to configuration or environment 
problems. This failure appears to be an internal failure; here's some 
additional information (which may only be relevant to an Open MPI 
developer): 

    ompi_mpi_init: ompi_rte_init failed 
    --> Returned "Error" (-1) instead of "Success" (0) 
-------------------------------------------------------------------------- 
-------------------------------------------------------------------------- 
It looks like orte_init failed for some reason; your parallel process is 
likely to abort. There are many reasons that a parallel process can 
fail during orte_init; some of which are due to configuration or 
environment problems. This failure appears to be an internal failure; 
here's some additional information (which may only be relevant to an 
Open MPI developer): 

    opal_init failed 
    --> Returned value Error (-1) instead of ORTE_SUCCESS 
-------------------------------------------------------------------------- 
*** An error occurred in MPI_Init_thread 
*** on a NULL communicator 
*** MPI_ERRORS_ARE_FATAL (processes in this communicator will now abort, 
*** and potentially your MPI job) 
[vlad-VirtualBox:3552] Local abort before MPI_INIT completed successfully; not able to aggregate error messages, and not able to guarantee that all other processes were killed! 
[vlad-VirtualBox:03552] mca: base: component_find: unable to open /usr/lib/openmpi/lib/openmpi/mca_shmem_mmap: /usr/lib/openmpi/lib/openmpi/mca_shmem_mmap.so: undefined symbol: opal_show_help (ignored) 
[vlad-VirtualBox:03552] mca: base: component_find: unable to open /usr/lib/openmpi/lib/openmpi/mca_shmem_sysv: /usr/lib/openmpi/lib/openmpi/mca_shmem_sysv.so: undefined symbol: opal_show_help (ignored) 
[vlad-VirtualBox:03552] mca: base: component_find: unable to open /usr/lib/openmpi/lib/openmpi/mca_shmem_posix: /usr/lib/openmpi/lib/openmpi/mca_shmem_posix.so: undefined symbol: opal_shmem_base_framework (ignored) 
-------------------------------------------------------------------------- 
It looks like opal_init failed for some reason; your parallel process is 
likely to abort. There are many reasons that a parallel process can 
fail during opal_init; some of which are due to configuration or 
environment problems. This failure appears to be an internal failure; 
here's some additional information (which may only be relevant to an 
Open MPI developer): 

    opal_shmem_base_select failed 
    --> Returned value -1 instead of OPAL_SUCCESS 
-------------------------------------------------------------------------- 
-------------------------------------------------------------------------- 
It looks like MPI_INIT failed for some reason; your parallel process is 
likely to abort. There are many reasons that a parallel process can 
fail during MPI_INIT; some of which are due to configuration or environment 
problems. This failure appears to be an internal failure; here's some 
additional information (which may only be relevant to an Open MPI 
developer): 

    ompi_mpi_init: ompi_rte_init failed 
    --> Returned "Error" (-1) instead of "Success" (0) 
-------------------------------------------------------------------------- 

Most of the solutions I found say to install OpenMPI directly from source, which is what I did, but that did not seem to solve my problem. Has anyone had a similar issue?

Answer


I uninstalled mpi4py, installed python-pip, and then installed mpi4py with pip under sudo. The list of commands:

sudo apt-get remove python-mpi4py 
sudo apt-get update && sudo apt-get -y upgrade 
sudo apt-get install python-pip 
sudo pip install mpi4py 
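pip builds mpi4py from source against whichever mpicc it finds first on PATH, so it is worth confirming which Open MPI the new build is linked to before rerunning the job. A quick sketch, assuming the install succeeded and the MPI library provides MPI_Get_library_version (Open MPI 2.0.1 does):

import mpi4py
from mpi4py import MPI
# Show the build-time configuration and the Open MPI version linked at
# runtime; both should refer to the same installation.
print(mpi4py.get_config())
print(MPI.Get_library_version())

After that, mpiexec -n 5 python something.py should print the hello-world line from each of the 5 ranks.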

Note: you will have to wait a little after sudo apt-get update && sudo apt-get -y upgrade. This question can now be closed, thank you.
