The ROme OpTimistic Simulator 2.0.0
A General-Purpose Multithreaded Parallel/Distributed Simulation Platform
MPI Support Module.
#include <stdbool.h>
#include <mpi.h>
#include <core/core.h>
#include <communication/wnd.h>
#include <statistics/statistics.h>
Macros

| Macro | Description |
| --- | --- |
| #define lock_mpi() {if(!mpi_support_multithread) spin_lock(&mpi_lock);} | This macro takes a global lock if multithread support is not available from MPI. |
| #define unlock_mpi() {if(!mpi_support_multithread) spin_unlock(&mpi_lock);} | This macro releases a global lock if multithreaded support is not available from MPI. |
Functions

| Function | Description |
| --- | --- |
| void mpi_init (int *argc, char ***argv) | Initialize MPI subsystem. |
| void inter_kernel_comm_init (void) | Initialize inter-kernel communication. |
| void inter_kernel_comm_finalize (void) | Finalize inter-kernel communication. |
| void mpi_finalize (void) | Finalize MPI. |
| void syncronize_all (void) | Synchronize all the kernels. |
| void send_remote_msg (msg_t *msg) | Send a message to a remote LP. |
| bool pending_msgs (int tag) | Check if there are pending messages. |
| void receive_remote_msgs (void) | Receive remote messages. |
| bool is_request_completed (MPI_Request *req) | Check if an MPI request has been completed. |
| bool all_kernels_terminated (void) | Check if all kernels have reached the termination condition. |
| void broadcast_termination (void) | Notify all the kernels about local termination. |
| void collect_termination (void) | Check if other kernels have reached the termination condition. |
| void mpi_reduce_statistics (struct stat_t *global, struct stat_t *local) | Invoke statistics reduction. |
Variables

| Variable | Description |
| --- | --- |
| bool mpi_support_multithread | Flag telling whether the MPI runtime supports multithreading. |
| spinlock_t mpi_lock | Spinlock used by lock_mpi() and unlock_mpi() to serialize MPI calls when the runtime does not support multithreading. |
MPI Support Module.
This file is part of ROOT-Sim (ROme OpTimistic Simulator).
ROOT-Sim is free software; you can redistribute it and/or modify it under the terms of the GNU General Public License as published by the Free Software Foundation; only version 3 of the License applies.
ROOT-Sim is distributed in the hope that it will be useful, but WITHOUT ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU General Public License for more details.
You should have received a copy of the GNU General Public License along with ROOT-Sim; if not, write to the Free Software Foundation, Inc., 51 Franklin St, Fifth Floor, Boston, MA 02110-1301 USA
Definition in file mpi.h.
bool all_kernels_terminated ( void )
Check if all kernels have reached the termination condition.
This function checks whether all threads have been informed of the fact that the simulation should be halted, and they have taken proper actions to terminate. Once this function confirms this condition, the process can safely exit.
Returns
    true if all the kernels have reached the termination condition.

void broadcast_termination ( void )
Notify all the kernels about local termination.
This function is used to inform all other simulation kernel instances that this kernel is ready to terminate the simulation.
void collect_termination ( void )
Check if other kernels have reached the termination condition.
This function accumulates termination acknowledgements from remote kernels, and updates the terminated counter.
Definition at line 289 of file mpi.c.
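As a rough illustration of how broadcast_termination(), collect_termination() and all_kernels_terminated() might fit together, the following sketch shows a hypothetical termination step in a kernel's main loop; the local_done and notified flags are illustration-only placeholders, not part of the ROOT-Sim API, and the include path is assumed.

```c
#include <stdbool.h>
#include <communication/mpi.h>   // assumed include path for this header

// Hedged sketch of a termination step in a kernel's main loop.
static bool notified = false;

void termination_step(bool local_done)
{
	// Tell the other kernel instances (once) that we are ready to stop.
	if (local_done && !notified) {
		broadcast_termination();
		notified = true;
	}

	// Accumulate termination acknowledgements from remote kernels.
	collect_termination();

	// Only when every kernel has reached the termination condition
	// can this process safely shut down.
	if (all_kernels_terminated()) {
		inter_kernel_comm_finalize();
		mpi_finalize();
	}
}
```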
void inter_kernel_comm_finalize ( void )

Finalize inter-kernel communication.

void inter_kernel_comm_init ( void )

Initialize inter-kernel communication.
bool is_request_completed ( MPI_Request * req )

Check if an MPI request has been completed.
This function checks whether the operation associated with the specified MPI Request has been completed or not.
Parameters
    req: A pointer to the MPI_Request to check for completion.
Returns
    true if the operation associated with req is complete, false otherwise.

Definition at line 144 of file mpi.c.
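A minimal sketch of polling an asynchronous operation with is_request_completed(); the payload, destination rank and tag are arbitrary illustration values, and the lock_mpi()/unlock_mpi() guard simply mirrors the purpose stated for those macros above.

```c
#include <mpi.h>
#include <stdbool.h>
#include <communication/mpi.h>   // assumed include path for this header

// Hedged sketch: start a non-blocking send and poll it for completion.
void example_poll_send(int dest_rank)
{
	int payload = 42;
	MPI_Request req;

	lock_mpi();
	MPI_Isend(&payload, 1, MPI_INT, dest_rank, 0, MPI_COMM_WORLD, &req);
	unlock_mpi();

	// Poll until MPI reports the operation as complete; real code would
	// interleave useful work here instead of busy-waiting.
	while (!is_request_completed(&req))
		;
}
```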
void mpi_finalize ( void )

Finalize MPI.
void mpi_init ( int * argc, char *** argv )
Initialize MPI subsystem.
This is mainly a wrapper around MPI_Init, plus some boilerplate code to initialize data structures.
Most notably, here we determine whether the MPI library in use offers suitable multithreading support, and we set up the MPI communicator which will later be used to exchange model-specific messages.
Definition at line 514 of file mpi.c.
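A minimal sketch of a plausible startup/shutdown ordering built from the functions documented in this file; the actual sequence inside ROOT-Sim may involve additional subsystems, and the include path is assumed.

```c
#include <communication/mpi.h>   // assumed include path for this header

int main(int argc, char **argv)
{
	// Bring up MPI: detects multithreading support and creates the
	// communicator used for model-specific messages.
	mpi_init(&argc, &argv);

	// Set up the data structures for inter-kernel communication.
	inter_kernel_comm_init();

	/* ... run the distributed simulation ... */

	// Tear down in reverse order.
	inter_kernel_comm_finalize();
	mpi_finalize();

	return 0;
}
```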
void mpi_reduce_statistics ( struct stat_t * global, struct stat_t * local )

Invoke statistics reduction.
This function is a simple wrapper around an MPI_Reduce operation, which uses the custom reduce operation implemented in reduce_stat_vector() to gather reduced statistics in the master kernel (rank 0).
Parameters
    global: A pointer to a struct stat_t where reduced statistics will be stored. The reduction only takes place at rank 0, therefore other simulation kernel instances will never read actual meaningful information in that structure.
    local: A pointer to a local struct stat_t which is used as the source of information for the distributed reduction operation.
Definition at line 428 of file mpi.c.
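A minimal usage sketch, assuming the caller has already filled a local struct stat_t; the reduced values are only meaningful on rank 0, and the include paths are assumptions.

```c
#include <string.h>
#include <statistics/statistics.h>
#include <communication/mpi.h>   // assumed include path for this header

// Hedged sketch: reduce per-kernel statistics into the master kernel.
void example_reduce_stats(struct stat_t *local)
{
	struct stat_t global;
	memset(&global, 0, sizeof(global));   // defensive initialization

	mpi_reduce_statistics(&global, local);

	/* rank 0 can now read the reduced statistics from `global` */
}
```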
bool pending_msgs ( int tag )
Check if there are pending messages.
This function tells whether there is a pending message in the underlying MPI library coming from any remote simulation kernel instance. If a tag different from MPI_ANY_TAG is passed, only messages carrying that specific tag are considered.
Messages are only extracted from the MPI_COMM_WORLD communicator. This is therefore only useful in startup/shutdown operations (indeed, it is used to initiate the GVT and to conclude the distributed simulation shutdown).
Parameters
    tag: The tag of the messages to check for availability.
Returns
    true if a pending message tagged with tag is found, false otherwise.

Definition at line 122 of file mpi.c.
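A minimal sketch of draining pending control messages; the tag value and the per-message handler are illustration-only placeholders, and the include path is assumed.

```c
#include <stdbool.h>
#include <mpi.h>
#include <communication/mpi.h>   // assumed include path for this header

// Hedged sketch: drain pending control messages on MPI_COMM_WORLD before
// proceeding. handle_one() is a hypothetical callback expected to receive
// exactly one pending message per call; passing MPI_ANY_TAG as control_tag
// would match any pending message instead of a specific class.
void example_drain_control_messages(int control_tag, void (*handle_one)(void))
{
	while (pending_msgs(control_tag))
		handle_one();
}
```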
void receive_remote_msgs ( void )
Receive remote messages.
This function extracts from MPI the events destined to locally-hosted LPs. Only messages directed to LPs can be extracted here, because probing is done on the msg_comm communicator.
A message which is extracted here is placed (out of order) in the bottom half of the destination LP, for later insertion (in order) in the input queue.
This function will try to extract as many messages as possible from the underlying MPI library. In particular, once called, it returns only when no more messages destined to this simulation kernel instance can be found in the MPI library.
Currently, this function is called once per main loop iteration. Calling it more often might significantly imbalance the workload of some worker threads.
Definition at line 208 of file mpi.c.
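A minimal sketch of how this function might be called once per main-loop iteration; the bottom-half processing step is a placeholder callback, and the include path is assumed.

```c
#include <communication/mpi.h>   // assumed include path for this header

// Hedged sketch of one main-loop iteration: remote events are drained into
// the bottom halves, then a later step (a placeholder callback here)
// inserts them in order into the input queues.
void example_main_loop_iteration(void (*process_bottom_halves)(void))
{
	// Returns only when MPI holds no more messages destined to this
	// simulation kernel instance.
	receive_remote_msgs();

	process_bottom_halves();
}
```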
void send_remote_msg ( msg_t * msg )
Send a message to a remote LP.
This function takes charge of an event to be delivered to a remote LP. The send operation is non-blocking: to this end, the message is registered in the outgoing queue of the destination kernel, so that MPI can keep track of the send operation.
Also, the message being sent is registered at the sender thread, to keep track of the white/red message information which is necessary to correctly reduce the GVT value.
Parameters
    msg: A pointer to the msg_t keeping the message to be sent remotely.
Definition at line 169 of file mpi.c.
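A minimal sketch of routing an event either locally or remotely; only send_remote_msg() is part of the documented API here, while the locality flag and the local-delivery callback are placeholders, and the include paths are assumptions.

```c
#include <stdbool.h>
#include <core/core.h>           // assumed to declare msg_t
#include <communication/mpi.h>   // assumed include path for this header

// Hedged sketch: route an event either locally or through MPI.
void example_route_event(msg_t *msg, bool destination_is_local,
                         void (*deliver_locally)(msg_t *))
{
	if (destination_is_local) {
		deliver_locally(msg);
	} else {
		// Non-blocking: the message is registered in the outgoing
		// queue of the destination kernel and tracked for GVT.
		send_remote_msg(msg);
	}
}
```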
void syncronize_all ( void )

Synchronize all the kernels.

This function can be used as a synchronization barrier across all the threads of all the kernels.
The function will return only after all the threads on all the kernels have already entered this function.
We create a new communicator here, to be sure that we synchronize exactly in this function and not somewhere else.
Definition at line 492 of file mpi.c.
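A minimal sketch using syncronize_all() to bracket a global phase so that no thread on any kernel enters or leaves the phase early; the phase body is an illustration-only placeholder, and the include path is assumed.

```c
#include <communication/mpi.h>   // assumed include path for this header

// Hedged sketch: bracket a global phase with barriers.
void example_global_phase(void (*phase_body)(void))
{
	syncronize_all();   // wait for every thread on every kernel
	phase_body();
	syncronize_all();   // ensure everyone has finished before moving on
}
```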
spinlock_t mpi_lock

Spinlock used by the lock_mpi() and unlock_mpi() macros to serialize access to MPI when the runtime does not support multithreading.
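A minimal sketch of the lock_mpi()/unlock_mpi() pattern that mpi_lock backs, guarding a raw MPI call; when mpi_support_multithread is true the macros expand to nothing, otherwise they serialize the call through mpi_lock. The include path is assumed.

```c
#include <mpi.h>
#include <communication/mpi.h>   // assumed include path for this header

// Hedged sketch: guard a raw MPI call with the locking macros.
int example_locked_probe(int tag)
{
	int flag = 0;
	MPI_Status status;

	lock_mpi();
	MPI_Iprobe(MPI_ANY_SOURCE, tag, MPI_COMM_WORLD, &flag, &status);
	unlock_mpi();

	return flag;
}
```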