Please help me fix this code. The teacher only says to read the assignment carefully and won't say what exactly the error is, and I cannot figure it out...

Task: Compose a program using blocking and non-blocking operations according to the variant. Ensure that operations are executed in several processes. The distribution of initial data must be performed using non-blocking operations, and the collection of results must be performed using blocking operations.
C++
b=min(A+C)

I have fixed the code many times, but the teacher still says it's wrong.
C++
#include <iostream>
#include <time.h>
#include <mpi.h>
#include <float.h>
using namespace std;
#define N 20

void PrintVector(double V[N])
{
for (int i = 0; i < N; i++)
{
    printf("%.2f ", V[i]);
}
printf("\n");
}

int main(int argc, char* argv[])
{
MPI_Init(&argc, &argv);
MPI_Status st;
int rank, size;
MPI_Comm_rank(MPI_COMM_WORLD, &rank);
MPI_Comm_size(MPI_COMM_WORLD, &size);
double* A, * C;

A = new double[N];
C = new double[N];
double localMin = DBL_MAX;
MPI_Request request[3];

if (!rank)
{
    srand((unsigned)time(0));
    for (int i = 0; i < N; i++) {
        A[i] = (rand() % 20) / 2.;
        C[i] = (rand() % 20) / 2.;
    }

    unsigned pSize = N / (size - 1);
    printf("Master started!\n");
    for (int i = 1; i < size; i++)
    {
        MPI_Isend(&pSize, 1, MPI_UNSIGNED, i, 0, 
                  MPI_COMM_WORLD, &request[0]);
        MPI_Isend(A + (i - 1) * pSize, pSize, 
                  MPI_DOUBLE, i, 0, MPI_COMM_WORLD, &request[1]);
        MPI_Isend(C + (i - 1) * pSize, pSize, 
                  MPI_DOUBLE, i, 0, MPI_COMM_WORLD, &request[2]);
        cout << "Master sent " << pSize << endl;
    }

    printf("Vector A:");
    PrintVector(A);
    printf("Vector C:");
    PrintVector(C);

    MPI_Waitall(3, request, MPI_STATUSES_IGNORE);

    for (int i = (size - 1) * pSize; i < N; i++) {
        double temp = A[i] + C[i];
        localMin = min(localMin, temp);
    }

    for (int i = 1; i < size; i++)
    {
        double receivedMin;
        MPI_Recv(&receivedMin, 1, MPI_DOUBLE, i, 
                 MPI_ANY_TAG, MPI_COMM_WORLD, &st);
        localMin = min(localMin, receivedMin);
    }
    printf("\nMinumum b = min(A+C) = %f", localMin);
}
else {
    double* dataA, * dataC;
    unsigned count;
    MPI_Recv(&count, 1, MPI_UNSIGNED, 0, MPI_ANY_TAG, 
             MPI_COMM_WORLD, &st);
    dataA = new double[count];
    dataC = new double[count];
    MPI_Recv(dataA, count, MPI_DOUBLE, 0, MPI_ANY_TAG, 
             MPI_COMM_WORLD, &st);
    MPI_Recv(dataC, count, MPI_DOUBLE, 0, MPI_ANY_TAG, 
             MPI_COMM_WORLD, &st);
    cout << "Rank " << rank << " got " << count << endl;
    for (unsigned i = 0; i < count; i++)
    {
        double temp = dataA[i] + dataC[i];
        localMin = min(localMin, temp);
    }

    MPI_Send(&localMin, 1, MPI_DOUBLE, 0, 0, MPI_COMM_WORLD);

    delete[] dataA;
    delete[] dataC;
}
delete[] A;
delete[] C;
MPI_Finalize();
}


What I have tried:

I've reworked the code many times and I still don't understand what's wrong with it.

1 solution

The requirement to use non-blocking operations for the distribution is met by MPI_Isend and MPI_Irecv.
However, a possible remainder is not taken into account when the elements are distributed.

C++
unsigned pSize = N / size; // Number of elements per process
unsigned remainder = N % size; // Remaining elements to be distributed

// Check whether the current process should get more elements
if (rank < remainder) {
     pSize++;
}
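
With this scheme no count needs to be sent at all: each rank can derive both its share and its starting offset locally. A minimal sketch (the helper LocalRange is hypothetical, not part of the program above):

C++
// Hypothetical helper: derive this rank's element count and starting
// offset from N, size and rank alone. Ranks below the remainder get
// one extra element, and their blocks sit at the front of the array.
void LocalRange(int n, int size, int rank,
                unsigned& pSize, unsigned& offset)
{
    pSize = n / size;
    unsigned remainder = n % size;
    if ((unsigned)rank < remainder) {
        pSize++;                                  // enlarged block
        offset = rank * pSize;                    // earlier ranks are enlarged too
    }
    else {
        offset = remainder * (pSize + 1)          // all enlarged blocks
               + (rank - remainder) * pSize;      // plus regular blocks before this rank
    }
}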

Edit:
Sending the length of the data does not seem necessary, since each process can compute its own share (see the sketch above). Instead of sending the result data and receiving it in process 0 in a loop with MPI_Recv, MPI_Reduce would make more sense (see the sketch after the example below).

All processes calculate a partial result, so it would be usual not to repeat the formula in the code. Also, the initialization of the variables should not be done in separate if/else branches.

Example:
C++
srand((unsigned)time(0));

if (rank == 0) {
    for (int i = 0; i < N; i++) {
        A[i] = (rand() % 20) / 2.;
        C[i] = (rand() % 20) / 2.;
    }
}

unsigned pSize = N / size;
unsigned remainder = N % size;
if (rank < remainder) {
    pSize++;
}

// Distribution of the data in parts
if (rank == 0) {
    // Send each range of data with MPI_Isend (non-blocking)
}
else {
    // Receive the local range of data with MPI_Recv (blocking)
}

// Calculation with all processes (without else)
for (int i = 0; i < pSize; i++) {
    double temp = A[i] + C[i];
    localMin = min(localMin, temp);
}

// Collecting the data with MPI_Reduce (blocking)
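
Filled in, the collection step could look like this minimal sketch (it assumes every process already holds its partial result in localMin, as above):

C++
double globalMin;
// Blocking collection: MPI_Reduce combines all partial minima
// with MPI_MIN and delivers the result to rank 0.
MPI_Reduce(&localMin, &globalMin, 1, MPI_DOUBLE, MPI_MIN, 0, MPI_COMM_WORLD);

if (rank == 0) {
    printf("\nMinimum b = min(A+C) = %f\n", globalMin);
}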

I'm sure more points could be found, but I won't comment further. Just a note that I see no need for MPI_Wait or MPI_Waitall here.
 
Comments
w4de 28-Oct-23 6:37am    
I may only use point-to-point operations, so MPI_Reduce is not suitable
merano99 28-Oct-23 7:56am    
The task description does not restrict which MPI functions may be used. But if necessary, MPI_Send can simply be used again at this point instead of MPI_Reduce. This does not change the basic structure of my proposal; it would only change the comment suggestion in the last line. My suggestion would significantly improve the above program and also shorten it considerably.
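
For reference, that point-to-point variant of the collection step could look like this minimal sketch (assuming localMin holds each process's partial minimum as in the example above):

C++
// Blocking point-to-point collection instead of MPI_Reduce:
// every worker sends its partial minimum, rank 0 folds them in.
if (rank != 0) {
    MPI_Send(&localMin, 1, MPI_DOUBLE, 0, 0, MPI_COMM_WORLD);
}
else {
    for (int i = 1; i < size; i++) {
        double receivedMin;
        MPI_Recv(&receivedMin, 1, MPI_DOUBLE, i, 0,
                 MPI_COMM_WORLD, MPI_STATUS_IGNORE);
        localMin = min(localMin, receivedMin);
    }
    printf("\nMinimum b = min(A+C) = %f\n", localMin);
}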
