simple-distributed-mpi-comm.cc File Reference

This test is equivalent to simple-distributed with the addition of initialization of MPI by user code (this program) and providing a communicator to ns-3.

#include "mpi-test-fixtures.h"
#include "ns3/core-module.h"
#include "ns3/internet-stack-helper.h"
#include "ns3/ipv4-address-helper.h"
#include "ns3/ipv4-global-routing-helper.h"
#include "ns3/ipv4-list-routing-helper.h"
#include "ns3/ipv4-static-routing-helper.h"
#include "ns3/mpi-interface.h"
#include "ns3/network-module.h"
#include "ns3/nix-vector-helper.h"
#include "ns3/on-off-helper.h"
#include "ns3/packet-sink-helper.h"
#include "ns3/packet-sink.h"
#include "ns3/point-to-point-helper.h"
#include <mpi.h>


Functions

void ReportRank (int color, MPI_Comm splitComm)
 Report my rank, in both MPI_COMM_WORLD and the split communicator.
 

Variables

const int NOT_NS_COLOR = NS_COLOR + 1
 Tag for whether this rank should go into a new communicator; ns-3 ranks will have color == 1.
 
const int NS_COLOR = 1
 Tag for whether this rank should go into a new communicator; ns-3 ranks will have color == 1.
 

Detailed Description

This test is equivalent to simple-distributed with the addition of initialization of MPI by user code (this program) and providing a communicator to ns-3.

The ns-3 communicator is smaller than MPI_COMM_WORLD, as might be the case if ns-3 is run in parallel with another simulator.
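
The overall shape of such a setup is sketched below. This is a hedged outline rather than this file's actual code: the color rule, the variable names, and the use of the MpiInterface::Enable overload that accepts an MPI communicator (declared in ns3/mpi-interface.h) are assumptions drawn from the description above, and the topology and application setup are elided.

    #include "ns3/core-module.h"
    #include "ns3/mpi-interface.h"

    #include <mpi.h>

    using namespace ns3;

    int
    main (int argc, char* argv[])
    {
      MPI_Init (&argc, &argv);                       // user code, not ns-3, starts MPI

      int worldRank;
      int worldSize;
      MPI_Comm_rank (MPI_COMM_WORLD, &worldRank);
      MPI_Comm_size (MPI_COMM_WORLD, &worldSize);

      // Illustrative rule: all but the last world rank join the ns-3 communicator
      // (see the color constants documented further below).
      int color = (worldRank == worldSize - 1) ? 2 : 1;
      MPI_Comm splitComm;
      MPI_Comm_split (MPI_COMM_WORLD, color, worldRank, &splitComm);

      if (color == 1)
        {
          // Select the distributed simulator, then hand ns-3 its smaller communicator.
          GlobalValue::Bind ("SimulatorImplementationType",
                             StringValue ("ns3::DistributedSimulatorImpl"));
          MpiInterface::Enable (splitComm);

          // ... build the dumbbell topology and install applications here ...

          Simulator::Run ();
          Simulator::Destroy ();
          MpiInterface::Disable ();                  // release ns-3's MPI resources
        }

      MPI_Comm_free (&splitComm);
      MPI_Finalize ();                               // user code finalizes the MPI it initialized
      return 0;
    }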

TestDistributed creates a dumbbell topology and logically splits it in half. The left half is placed on logical processor 0 and the right half is placed on logical processor 1.

            -------   -------
             RANK 0    RANK 1
            ------- | -------
                    |
 n0 ---------|      |      |---------- n6
             |      |      |
 n1 -----\   |      |      |   /------ n7
          \  |      |      |  /
           n4 ------|------- n5
          /  |      |      |  \
 n2 -----/   |      |      |   \------ n8
             |      |      |
 n3 ---------|      |      |---------- n9
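
In a distributed build of this topology each node is created with the system id (rank) it should live on. The fragment below is a minimal sketch of that placement; the container names, link attributes, and node counts are illustrative assumptions, and the file's own includes are assumed.

    // Sketch: create each half of the dumbbell on its own logical processor.
    NodeContainer leftLeaves;   // n0..n3, owned by rank 0
    NodeContainer rightLeaves;  // n6..n9, owned by rank 1
    for (uint32_t i = 0; i < 4; ++i)
      {
        leftLeaves.Add (CreateObject<Node> (0));
        rightLeaves.Add (CreateObject<Node> (1));
      }
    Ptr<Node> n4 = CreateObject<Node> (0);           // left router on rank 0
    Ptr<Node> n5 = CreateObject<Node> (1);           // right router on rank 1

    // The n4--n5 link crosses the rank boundary, so packets traversing it are
    // carried between logical processors as MPI messages.
    PointToPointHelper routerLink;
    routerLink.SetDeviceAttribute ("DataRate", StringValue ("5Mbps"));
    routerLink.SetChannelAttribute ("Delay", StringValue ("5ms"));
    NetDeviceContainer routerDevices = routerLink.Install (n4, n5);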

OnOff clients are placed on each left leaf node. Each right leaf node is a packet sink for a left leaf node. As a packet travels from one logical processor to another (the link between n4 and n5), MPI messages are passed containing the serialized packet. The message is then deserialized into a new packet and sent on as normal.

One packet is sent from each left leaf node. The packet sinks on the right leaf nodes output logging information when they receive the packet.
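
A hedged sketch of one such client/sink pair follows; the port number, byte count, and the leftLeaf, rightLeaf, and rightLeafAddress names are illustrative assumptions, not this example's actual values.

    // Sketch: one client/sink pair.  With the OnOff application's default
    // 512-byte packets, MaxBytes = 512 limits the client to a single packet.
    uint16_t port = 50000;

    PacketSinkHelper sinkHelper ("ns3::UdpSocketFactory",
                                 InetSocketAddress (Ipv4Address::GetAny (), port));
    ApplicationContainer sinkApp = sinkHelper.Install (rightLeaf);    // e.g. n6
    sinkApp.Start (Seconds (1.0));

    OnOffHelper clientHelper ("ns3::UdpSocketFactory",
                              InetSocketAddress (rightLeafAddress, port));
    clientHelper.SetAttribute ("MaxBytes", UintegerValue (512));
    ApplicationContainer clientApp = clientHelper.Install (leftLeaf); // e.g. n0
    clientApp.Start (Seconds (1.0));

In a distributed run each rank typically installs applications only on the nodes it owns, guarded by a check against MpiInterface::GetSystemId ().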

Definition in file simple-distributed-mpi-comm.cc.

Function Documentation

◆ ReportRank()

void ReportRank (int color, MPI_Comm splitComm)

Report my rank, in both MPI_COMM_WORLD and the split communicator.

Parameters

    [in]  color      My role, either ns-3 rank or other rank.
    [in]  splitComm  The split communicator.

Definition at line 96 of file simple-distributed-mpi-comm.cc.

References ns3::SinkTracer::GetWorldRank(), ns3::SinkTracer::GetWorldSize(), NS_COLOR, RANK0COUT, and RANK0COUTAPPEND.
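
The body of such a report can be approximated with plain MPI calls. The sketch below is only an illustration with assumed output formatting; it is not the definition cited above, which uses the SinkTracer helpers and the RANK0COUT macros.

    #include <mpi.h>

    #include <iostream>

    // Illustrative stand-in: report a rank in both MPI_COMM_WORLD and the
    // split communicator.  The literal 1 stands for NS_COLOR.
    void
    ReportRankSketch (int color, MPI_Comm splitComm)
    {
      int worldRank;
      int worldSize;
      MPI_Comm_rank (MPI_COMM_WORLD, &worldRank);
      MPI_Comm_size (MPI_COMM_WORLD, &worldSize);

      int splitRank;
      int splitSize;
      MPI_Comm_rank (splitComm, &splitRank);
      MPI_Comm_size (splitComm, &splitSize);

      std::cout << (color == 1 ? "ns-3 rank " : "other rank ")
                << worldRank << " of " << worldSize << " in MPI_COMM_WORLD, "
                << splitRank << " of " << splitSize << " in the split communicator"
                << std::endl;
    }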


Variable Documentation

◆ NOT_NS_COLOR

const int NOT_NS_COLOR = NS_COLOR + 1

Tag for whether this rank should go into a new communicator; ns-3 ranks will have color == 1.

Definition at line 85 of file simple-distributed-mpi-comm.cc.

◆ NS_COLOR

const int NS_COLOR = 1

Tag for whether this rank should go into a new communicator; ns-3 ranks will have color == 1.

Definition at line 84 of file simple-distributed-mpi-comm.cc.

Referenced by ReportRank().
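
A short sketch of how these color constants might feed the communicator split is shown below; the rule deciding which world rank is excluded from ns-3 is an assumption for illustration, not necessarily the one used in this file.

    // Sketch: every world rank except the last joins the ns-3 communicator.
    int worldRank;
    int worldSize;
    MPI_Comm_rank (MPI_COMM_WORLD, &worldRank);
    MPI_Comm_size (MPI_COMM_WORLD, &worldSize);

    int color = (worldRank == worldSize - 1) ? NOT_NS_COLOR : NS_COLOR;

    MPI_Comm splitComm;
    MPI_Comm_split (MPI_COMM_WORLD, color, worldRank, &splitComm);
    ReportRank (color, splitComm);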