I'm using OpenMP to parallelise a couple of for() loops, but I'm not sure which "#pragma" directive is the right one to use. Below are two situations I've faced that I hope can be parallelised. I'd really appreciate your help with this.

Situation 1 - Is it possible to use "#pragma omp parallel for" here? I'm worried that returning a value from inside the loop would break under parallel execution.
C++
Triangle Triangle::t_ID_lookup(Triangle a[], int ID, int n)
{
    for(int i=0; i<n; i++)
    {
        if(ID==a[i].t_ID)
            return a[i];        // early exit on the first match
    }
    return Triangle();          // default-constructed result if no match
}
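A return statement cannot jump out of an OpenMP parallel loop, so one common workaround is to record the index of a match in a shared variable and return after the loop. A minimal sketch of this idea is below; it assumes a simplified standalone Triangle struct and a free function t_ID_lookup_parallel (not the original member function), and it keeps the lowest matching index so the result agrees with the serial loop even when several elements share the same ID.

```cpp
#include <cassert>

// Simplified stand-in for the poster's Triangle class.
struct Triangle {
    int t_ID;
    double diagonal;
};

// Parallel linear search: every thread scans its chunk of the array.
// Matches update a shared index inside a critical section, which avoids
// the data race that a direct `return` inside the loop would cause.
Triangle t_ID_lookup_parallel(const Triangle a[], int ID, int n)
{
    int found = -1;                      // index of a match, -1 if none
    #pragma omp parallel for shared(found)
    for (int i = 0; i < n; i++)
    {
        if (a[i].t_ID == ID)
        {
            #pragma omp critical
            {
                // Keep the lowest index so the answer matches serial code.
                if (found == -1 || i < found)
                    found = i;
            }
        }
    }
    return (found >= 0) ? a[found] : Triangle{};
}
```

Note that every thread still scans its whole chunk even after a match is found; OpenMP has no portable early-exit for worksharing loops, so for a cheap comparison like this the serial loop may well be faster.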

Situation 2 - I've tried "#pragma omp parallel for" here, but it crashes at run time. I believe push_back() is not safe to call from multiple threads at once.
C++
vector<Triangle> Triangle::special(Triangle a[], int n)
{
    vector<Triangle> spec;
    for(int i=0; i<n; i++)
    {
        if(a[i].diagonal > 27.0)
            spec.push_back(a[i]);
    }
    return spec;
}
Posted; updated 19-Apr-15 9:23am (v4)
Comments
[no name] 20-Apr-15 21:54pm    
No, push_back is not thread-safe.

Situation 1 could be a problem if there are multiple a[i] with the same ID (concurrent assignment of Triangle).
The_Inventor 30-Apr-15 1:04am    
I presume you are using the for loop to assign the jobs, one job per processor. You first need to get the number of available processors, established before calling the function, and pass that value into the function as the parameter n. #pragma statements are better placed in a header and pushed or popped within the header.

This content, along with any associated source code and files, is licensed under The Code Project Open License (CPOL)