Continuing from my previous topic: I don't understand how to fix my MPI code to meet the assignment requirements.

Task: Write a program that uses blocking and non-blocking operations according to the variant. The operations must be executed across several processes: the distribution of the initial data must use non-blocking operations, and the collection of the results must use blocking operations.
b=min(A+C)

Only one if-statement, if (rank == 0), may be used to separate the processes. In addition, neither MPI_Scatter nor MPI_Reduce may be used.

The teacher asked me which receive command should work in parallel with MPI_Isend. I answered MPI_Recv (since, per the assignment, we send with non-blocking operations and receive with blocking ones), but he said that was wrong. He also said the code always contains the same error due to a misunderstanding of the task: the collection of results at rank 0 should be done with blocking operations.

UPD: I moved MPI_Waitall; I'm not sure if it is correct. I now declare the MPI_Request arrays in rank 0 only.

#include <mpi.h>
#include <algorithm>   // min
#include <cfloat>      // DBL_MAX
#include <cstdlib>     // rand, srand
#include <ctime>       // time
#include <iostream>
using namespace std;

#define N 13

int main(int argc, char* argv[])
{
    int rank, size;
    double* A = 0, * C = 0, localMin = DBL_MAX;
    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &size);

    int pSize = N / size;     
    int remainder = N % size;

    if (rank < remainder) {
        pSize++;
    }

    cout << "Process " << rank << ", pSize = " << pSize << endl;

    if (rank == 0) {
        srand((unsigned)time(0));
        A = new double[N];
        C = new double[N];
        for (int i = 0; i < N; i++) {
            A[i] = (rand() % 20) / 2.;
            C[i] = (rand() % 20) / 2.;
            cout << i << ". sum: " << A[i] + C[i] << endl;
        }

        // request handles are needed only by rank 0
        MPI_Request* requestA = new MPI_Request[size - 1];
        MPI_Request* requestC = new MPI_Request[size - 1];

        int offset = pSize;  // rank 0 keeps the first chunk for itself
        // distribute the remaining chunks with non-blocking sends
        for (int i = 1; i < size; i++) {
            int send_count;
            if (remainder == 0 || i < remainder) {
                send_count = pSize;
            }
            else {
                send_count = pSize - 1;
            }
            MPI_Isend(A + offset, send_count, MPI_DOUBLE, i, 0, MPI_COMM_WORLD, &requestA[i - 1]);
            MPI_Isend(C + offset, send_count, MPI_DOUBLE, i, 0, MPI_COMM_WORLD, &requestC[i - 1]);
            offset += send_count;
        }

        // rank 0 computes its own partial minimum while the sends are in flight
        for (int i = 0; i < pSize; i++) {
            double temp = A[i] + C[i];
            localMin = min(localMin, temp);
        }

        // only now wait for the non-blocking sends to complete,
        // after rank 0 has done its own share of the work
        MPI_Waitall(size - 1, requestA, MPI_STATUSES_IGNORE);
        MPI_Waitall(size - 1, requestC, MPI_STATUSES_IGNORE);

        // collect the partial results with blocking receives
        double globalMin = localMin;
        for (int i = 1; i < size; i++) {
            double poluchMin;
            MPI_Recv(&poluchMin, 1, MPI_DOUBLE, i, 0, MPI_COMM_WORLD, MPI_STATUS_IGNORE);
            globalMin = min(globalMin, poluchMin);
        }
        cout << "Minimum min(A+C) = " << globalMin << endl;
        delete[] requestA;
        delete[] requestC;

    }
    else {
        A = new double[pSize];
        C = new double[pSize];
        // blocking receives match the non-blocking sends from rank 0
        MPI_Recv(A, pSize, MPI_DOUBLE, 0, 0, MPI_COMM_WORLD, MPI_STATUS_IGNORE);
        MPI_Recv(C, pSize, MPI_DOUBLE, 0, 0, MPI_COMM_WORLD, MPI_STATUS_IGNORE);

        for (int i = 0; i < pSize; i++) {
            double temp = A[i] + C[i];
            localMin = min(localMin, temp);
        }

        MPI_Send(&localMin, 1, MPI_DOUBLE, 0, 0, MPI_COMM_WORLD);
    }
    delete[] A;
    delete[] C;

    MPI_Finalize();
}


What I have tried:

I have reworked the code many times with the help of merano99. Thank you very much for your help; I couldn't have done it myself.
Updated 16-Nov-23 19:57pm

1 solution

Quote:
the collection of results at rank 0 should be done with blocking operations

Unfortunately, despite several revisions, important details are still missing.
It is obvious that you are still blocking unnecessarily in the wrong place. I assume this is exactly the point that shows the teacher you have not understood it. I have already pointed out several times that process 0 should do its own share of the work in parallel with all the other processes.

Note: You forgot to mention that only one if-statement, if (rank == 0), may be used to split the processes, and that neither Scatter nor Reduce may be used.
 
Comments
w4de 16-Nov-23 11:23am    
I still don't fully understand what they want me to do.
merano99 16-Nov-23 12:01pm    
You would need to think about the location of MPI_Waitall. That is almost certainly exactly the point the teacher meant. I would also declare and use the variables requestA and requestC only in process 0, since no other process needs them.
w4de 17-Nov-23 1:57am    
Hello, I have updated the code in the topic. Have a look; I'm not sure whether it's correct now.
merano99 17-Nov-23 3:44am    
Looks OK now. Now answer the teacher's question correctly and it's finally finished. Comments in the source code would of course be helpful.
w4de 17-Nov-23 3:46am    
"Now answer the teacher's question correctly and it's finally finished" - I didn't quite get that. Should MPI_Isend run in parallel?

This content, along with any associated source code and files, is licensed under The Code Project Open License (CPOL)
