mpirun doesn't work with OpenFOAM (suddenly) [closed]

My mpirun on Ubuntu 20.04 doesn't work anymore.

I used it a few weeks ago and it worked just fine. I tried to use it again a few minutes ago and now it no longer works.

First I decompose the case, then I use the command

mpirun -np 4 interFoam -parallel
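
For context, this is the usual OpenFOAM parallel workflow; the sequence below is only a rough sketch, and the exact meshing utility and decomposeParDict settings in my case may differ:

blockMesh                          # build the mesh
decomposePar                       # split the case into processor* directories (numberOfSubdomains must match -np)
mpirun -np 4 interFoam -parallel   # run the solver on 4 processes
reconstructPar                     # merge the processor results afterwards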

Then this message appears:

kai@Kai-Desktop:~/OpenFOAM/kai-7/run/tutorials_of/multiphase/interFoam/laminar/damBreak_stl_II/damBreak$ mpirun -np 4 interFoam -parallel

It looks like MPI_INIT failed for some reason; your parallel process is
likely to abort.  There are many reasons that a parallel process can
fail during MPI_INIT; some of which are due to configuration or environment
problems.  This failure appears to be an internal failure; here's some
additional information (which may only be relevant to an Open MPI
developer):

  ompi_mpi_init: ompi_rte_init failed
  --> Returned "(null)" (-43) instead of "Success" (0)

It looks like MPI_INIT failed for some reason; your parallel process is
likely to abort.  There are many reasons that a parallel process can
fail during MPI_INIT; some of which are due to configuration or environment
problems.  This failure appears to be an internal failure; here's some
additional information (which may only be relevant to an Open MPI
developer):

  ompi_mpi_init: ompi_rte_init failed
  --> Returned "(null)" (-43) instead of "Success" (0)

It looks like MPI_INIT failed for some reason; your parallel process is
likely to abort.  There are many reasons that a parallel process can
fail during MPI_INIT; some of which are due to configuration or environment
problems.  This failure appears to be an internal failure; here's some
additional information (which may only be relevant to an Open MPI
developer):

  ompi_mpi_init: ompi_rte_init failed
  --> Returned "(null)" (-43) instead of "Success" (0)

It looks like MPI_INIT failed for some reason; your parallel process is
likely to abort.  There are many reasons that a parallel process can
fail during MPI_INIT; some of which are due to configuration or environment
problems.  This failure appears to be an internal failure; here's some
additional information (which may only be relevant to an Open MPI
developer):

  ompi_mpi_init: ompi_rte_init failed
  --> Returned "(null)" (-43) instead of "Success" (0)

*** An error occurred in MPI_Init_thread
*** on a NULL communicator
*** MPI_ERRORS_ARE_FATAL (processes in this communicator will now abort,
***    and potentially your MPI job)
[Kai-Desktop:3304] Local abort before MPI_INIT completed completed successfully, but am not able to aggregate error messages, and not able to guarantee that all other processes were killed!
*** An error occurred in MPI_Init_thread
*** on a NULL communicator
*** MPI_ERRORS_ARE_FATAL (processes in this communicator will now abort,
***    and potentially your MPI job)
[Kai-Desktop:3305] Local abort before MPI_INIT completed completed successfully, but am not able to aggregate error messages, and not able to guarantee that all other processes were killed!
*** An error occurred in MPI_Init_thread
*** on a NULL communicator
*** MPI_ERRORS_ARE_FATAL (processes in this communicator will now abort,
***    and potentially your MPI job)
[Kai-Desktop:3306] Local abort before MPI_INIT completed completed successfully, but am not able to aggregate error messages, and not able to guarantee that all other processes were killed!
*** An error occurred in MPI_Init_thread
*** on a NULL communicator
*** MPI_ERRORS_ARE_FATAL (processes in this communicator will now abort,
***    and potentially your MPI job)
[Kai-Desktop:3307] Local abort before MPI_INIT completed completed successfully, but am not able to aggregate error messages, and not able to guarantee that all other processes were killed!

Primary job  terminated normally, but 1 process returned
a non-zero exit code. Per user-direction, the job has been aborted.

mpirun detected that one or more processes exited with non-zero status, thus causing
the job to be terminated. The first process to do so was:

  Process name: [[48186,1],0]
  Exit code:    1

 kai@Kai-Desktop:~/OpenFOAM/kai-7/run/tutorials_of/multiphase/interFoam/laminar/damBreak_stl_II/damBreak$

The Open MPI version is:

kai@Kai-Desktop:~/Dokumente$ mpirun --version
mpirun (Open MPI) 4.0.3
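
These are the environment checks I would look at to see which MPI OpenFOAM expects (the variable names come from the OpenFOAM bashrc; the example values in the comments are assumptions, not taken from my machine):

which mpirun        # which Open MPI launcher is actually on the PATH
echo $WM_MPLIB      # MPI flavour OpenFOAM was configured for, e.g. SYSTEMOPENMPI
echo $FOAM_MPI      # MPI directory OpenFOAM links against, e.g. openmpi-system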

I have also created a test *.c file:

#include <mpi.h>
#include <stdio.h>

int main(int argc, char** argv) {
    // Initialize the MPI environment
    MPI_Init(NULL, NULL);

    // Get the number of processes
    int world_size;
    MPI_Comm_size(MPI_COMM_WORLD, &world_size);

    // Get the rank of the process
    int world_rank;
    MPI_Comm_rank(MPI_COMM_WORLD, &world_rank);

    // Get the name of the processor
    char processor_name[MPI_MAX_PROCESSOR_NAME];
    int name_len;
    MPI_Get_processor_name(processor_name, &name_len);

    // Print off a hello world message
    printf("Hello world from processor %s, rank %d out of %d processors\n",
           processor_name, world_rank, world_size);

    // Finalize the MPI environment.
    MPI_Finalize();
    return 0;
}
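
It was compiled with the Open MPI compiler wrapper; the exact invocation is from memory:

mpicc hello_world.c -o hello_world   # mpicc adds the MPI include paths and libraries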

If I compile and run it, this is the output:

kai@Kai-Desktop:~/Dokumente$ mpirun -np 4 ./hello_world -parallel
Hello world from processor Kai-Desktop, rank 0 out of 4 processors
Hello world from processor Kai-Desktop, rank 1 out of 4 processors
Hello world from processor Kai-Desktop, rank 2 out of 4 processors
Hello world from processor Kai-Desktop, rank 3 out of 4 processors
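
So the system Open MPI itself seems to work; only the OpenFOAM binaries fail in MPI_Init. A check like the following (just a sketch) would show whether interFoam and the test program load the same libmpi:

ldd $(which interFoam) | grep -i mpi   # MPI library pulled in by the OpenFOAM solver
ldd ./hello_world | grep -i mpi        # MPI library pulled in by the working test program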

Does anyone know what to do to get rid of this error? If you need more information, just say so and I will provide it. I don't know whether anything MPI-related changed on the system. I did upgrade from 18.04 to 20.04, but I don't know whether that caused this error.

Best regards, Kai

asked 28 December 2020 at 18:41

1 Answer

I upgraded from OpenFOAM v7 to OpenFOAM v8 and it works again.

I still don't know why...
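
If it helps anyone else: after a release upgrade it seems worth checking that the installed OpenFOAM build actually matches the new Ubuntu version, since a build made against the 18.04 Open MPI can apparently fail in ompi_rte_init on 20.04. A rough check:

lsb_release -rs                                        # current Ubuntu release
apt list --installed 2>/dev/null | grep -i openfoam    # installed OpenFOAM package(s)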

answered 29 December 2020 at 15:28
