
A Particle Swarm Optimizer In C#

An artificial life algorithm that attempts to solve a problem by flying a swarm of entities through a range of possible solutions, where each entity is guided by the performance of other members of the swarm

Introduction

The particle swarm optimizer (PSO) is a problem-solving algorithm that was first proposed in 1995 by James Kennedy and Russell Eberhart. It was used initially for modelling the movement of birds in a flock. The basic idea is to move (fly) a group (swarm) of problem-solving entities (particles) throughout the range of possible solutions to a problem. This range is known as the problem space. The movement of particles within the problem space has a random component, but is mainly guided by three factors:

  1. The present position of the particle
  2. The best position found so far by the particle (known as the personal best or pBest)
  3. The best position found so far by the swarm (known as the global best or gBest)

Particle Swarm Optimization Step By Step

Say, for example, that the problem was to find the values of X and Y that minimize the equation (X*X)-(Y*Y), where X and Y are integers in the range 0 to 10. The algorithm follows the execution path below, sketched in code after the list.

  1. Initialize the particles with random values of X and Y in the range 0-10.
  2. Determine the fitness of each particle by evaluating the equation with its present values of X and Y.
  3. Update each particle's position based on its personal best and the global best fitness values.
  4. Repeat steps 2 and 3 until either a preset fitness value is reached or a set number of iterations has been completed.
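
To make the steps concrete, here is a minimal, self-contained sketch of that loop for the toy problem. It is my own illustration rather than the article's library code; it treats X and Y as continuous values, uses the typical constant values quoted later in the article, and handles out-of-range particles in the way described near the end of the article.

C#
// A minimal PSO sketch (mine, not the article's library code) for
// minimising f(x, y) = (x*x) - (y*y) with x and y in the range 0-10.
// Assumes 'using System;' and 'using System.Linq;'.
var rand = new Random();
const int swarmSize = 20, maxEpochs = 100;   // step 4: fixed iteration budget
const double w = 0.7, c1 = 1.4, c2 = 1.4;    // typical constants, quoted below

Func<double[], double> f = p => p[0] * p[0] - p[1] * p[1];

var pos = new double[swarmSize][];           // current positions
var vel = new double[swarmSize][];           // current velocities
var pBest = new double[swarmSize][];         // personal best positions
var pBestErr = new double[swarmSize];
var gBest = new double[2];                   // global best position
double gBestErr = double.MaxValue;

for (int i = 0; i < swarmSize; i++)          // step 1: random initialisation
{
    pos[i] = new[] { rand.NextDouble() * 10, rand.NextDouble() * 10 };
    vel[i] = new double[2];
    pBest[i] = (double[])pos[i].Clone();
    pBestErr[i] = f(pos[i]);
    if (pBestErr[i] < gBestErr) { gBestErr = pBestErr[i]; pos[i].CopyTo(gBest, 0); }
}

for (int epoch = 0; epoch < maxEpochs; epoch++)
{
    for (int i = 0; i < swarmSize; i++)
    {
        for (int d = 0; d < 2; d++)          // step 3: velocity, then position, update
        {
            vel[i][d] = w * vel[i][d]
                        + c1 * rand.NextDouble() * (pBest[i][d] - pos[i][d])
                        + c2 * rand.NextDouble() * (gBest[d] - pos[i][d]);
            pos[i][d] += vel[i][d];
        }
        double err = f(pos[i]);              // step 2: fitness evaluation
        // Escaped particles keep flying but may not update the best positions.
        if (pos[i].All(v => v >= 0 && v <= 10))
        {
            if (err < pBestErr[i]) { pBestErr[i] = err; pos[i].CopyTo(pBest[i], 0); }
            if (err < gBestErr) { gBestErr = err; pos[i].CopyTo(gBest, 0); }
        }
    }
}
Console.WriteLine($"Best: x={gBest[0]:F2} y={gBest[1]:F2} error={gBestErr:F2}");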

Rosenbrock Function

The Rosenbrock function is often used to test optimizers. Its minimum is difficult to find because it lies in a long, narrow valley, and higher-dimensional versions of the function have many local minima. It's useful to bear in mind, when selecting functions to test, that the optimizer has a built-in bias: it finds solutions that lie at the centre of the problem space, or on its edges, more easily than others.
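
In its common two-dimensional form, which is the one the demo application later in the article uses, the function is f(x, y) = (1 - x)^2 + 100(y - x^2)^2, with a global minimum of 0 at (1, 1). As a sketch, it could be expressed as an error function like this:

C#
// Two-dimensional Rosenbrock function; the global minimum, 0, is at (1, 1).
Func<double[], double> rosenbrock = xd =>
    Math.Pow(1 - xd[0], 2) + 100 * Math.Pow(xd[1] - xd[0] * xd[0], 2);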


The Key Optimization Formula in Easy Stages

Stage 1

Update the velocity of the particle for each dimension; in the example, that means for both X and Y. The velocity is the amount by which the position changes on each iteration.

vid = W*vid + C1*rand*(pid - xid) + C2*Rand*(pgd - xid)

where:

  • vid is the current velocity (of particle i in dimension d),
  • W, C1 and C2 are constants; suitable approximate values are C1 = C2 = 1.4 and W = 0.7,
  • rand and Rand are two independently generated random doubles >= 0 and < 1,
  • xid is the current position, pid is the personal best position and pgd is the global best position.

Stage 2

Update the current position by adding the new velocity to it.

xid = xid + vid
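
As a quick worked example, take W = 0.7 and C1 = C2 = 1.4 (the typical values above) and suppose that, for one dimension, vid = 2.0, xid = 5.0, pid = 4.0, pgd = 3.0, rand = 0.5 and Rand = 0.25 (values chosen purely for illustration). Stage 1 gives vid = 0.7*2.0 + 1.4*0.5*(4.0 - 5.0) + 1.4*0.25*(3.0 - 5.0) = 1.4 - 0.7 - 0.7 = 0.0, so Stage 2 leaves xid at 5.0; with different random draws, the same particle would instead be nudged towards its personal and global bests.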

That's it. There isn't any mutation or generation of new particles. It is simply a semi-random search process, directed through information exchange between members of the swarm.

Recent Refinements

Modern variations of the algorithm use a local best position rather than a global best. This tends to ensure better exploration of the problem space and prevents too rapid a convergence on some regional minimum. Each particle has a fixed, overlapping collection of informers from which its local best position is derived, and the particles are linked to each other in a ring structure. For example, in a six-particle swarm, A to F, with the number of informers set at two, particle A would be informed by particles F and B, particle B by particles A and C, and particle F by particles E and A.

Another variation is to have discrete groups of informers and to reorganize the groups after a certain number of iterations have passed without the global best value changing. This enables all the groups to be optimized concurrently on different threads and redistributed after a set number of epochs have passed without an improvement.
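
As an illustration, the ring neighbours can be picked out with a couple of modulo operations. This is a minimal sketch of the idea, not the article's actual informer-group code:

C#
// For a ring of n particles with two informers each, particle i is informed
// by its immediate neighbours, wrapping around at both ends of the ring.
static int[] GetRingInformers(int i, int n)
{
    return new[] { (i - 1 + n) % n, (i + 1) % n };
}
// e.g. in a six-particle swarm A-F, GetRingInformers(0, 6) returns { 5, 1 }:
// particle A is informed by particles F and B.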


Keeping the Particles Within the Problem Space

It is possible for the particles to fly outside the problem space; in the first example, X could acquire a value greater than 10. There are several ways of dealing with this situation. The simplest is to let them fly, but not to allow the escaped particles to update the pBest or lBest positions. The particles will then be pulled back into the correct range under the influence of pBest and lBest.
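
Such a check might look like the sketch below; in the library it is performed by ParticleManager.CheckIsInRange, whose actual implementation may differ:

C#
// Returns true only if every dimension lies inside the allowed range.
// Escaped particles keep flying, but callers skip the pBest/lBest update
// while this returns false, so the swarm pulls them back into range.
static bool IsInRange(double[] position, double min, double max)
{
    foreach (double x in position)
    {
        if (x < min || x > max)
        {
            return false;
        }
    }
    return true;
}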

The Code

The Particle class has the following constructor.

C#
public Particle(int dimensions, Func<double[], double> errorFunc,
                ParticleManager particleManager)
{
    this.Dimensions = dimensions;
    this.particleManager = particleManager;
    this.errorFunc = errorFunc;
    this.Initialize();
}

The problem to be solved is passed in as a Func<double[], double>. It returns an error value which the algorithm will try to minimise. The double[] contains all the parameters (dimensions) of the function.
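
For instance, the toy problem from the introduction could be expressed like this (the constructor call assumes a ParticleManager instance called manager):

C#
// Error function for the introductory example: minimise (X*X) - (Y*Y).
Func<double[], double> errorFunc = xd => xd[0] * xd[0] - xd[1] * xd[1];

// Create a two-dimensional particle that will be scored by that function.
var particle = new Particle(2, errorFunc, manager);

The ParticleManager class handles the maths.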

C#
public double[] UpdateVelocity(IParticle particle, double[] bestLocalPosition)
{
    var newVelocity = new double[particle.Velocity.Length];
    for (int j = 0; j < particle.Velocity.Length; ++j) // each component of the velocity
    {
        newVelocity[j] = (this.psoAttributes.W * particle.Velocity[j])
                         + (this.psoAttributes.C1 * this.randomFactory.NextRandomDouble()
                            * (particle.BestPosition[j] - particle.Position[j]))
                         + (this.psoAttributes.C2 * this.randomFactory.NextRandomDouble()
                            * (bestLocalPosition[j] - particle.Position[j]));
    }
    newVelocity.CopyTo(particle.Velocity, 0);
    return particle.Velocity;
}

public double[] UpdatePosition(IParticle particle)
{
    var newPosition = new double[particle.Position.Length];
    for (int j = 0; j < particle.Position.Length; ++j)
    {
        newPosition[j] = particle.Position[j] + particle.Velocity[j]; // add velocity to position
    }
    newPosition.CopyTo(particle.Position, 0);
    return particle.Position;
}

The Optimize functions are fairly straightforward. The number of consecutive static epochs is used to provide an early exit if no improvement is being made. The attributes for the optimizer are set in the app.config file.

C#
public double[] Optimize()
{
    int epoch = 0;
    int staticEpochs = 0;
    while (epoch < this.psoAttributes.MaxEpochs &&
           staticEpochs < this.psoAttributes.MaxStaticEpochs)
    {
        bool isErrorImproved = false;
        foreach (IParticle particle in this.Swarm)
        {
            double error = particle.OptimizePosition();
            if (error < this.BestGlobalError)
            {
                particle.Position.CopyTo(this.BestGlobalPosition, 0);
                this.BestGlobalError = error;
                isErrorImproved = true;
                staticEpochs = 0;
            }
        }
        if (!isErrorImproved)
        {
            staticEpochs++;
        }
        epoch++;
    }
    return this.BestGlobalPosition;
}

public double UpdateErrorValues()
{
    if (this.particleManager.CheckIsInRange(this.Position))
    {
        double newError = this.particleManager.CalculateError(this.Position, this.errorFunc);
        if (newError < this.BestError)
        {
            this.Position.CopyTo(this.BestPosition, 0);
            this.BestError = newError;
        }
        this.Error = newError;
    }
    return this.BestError;
}
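
The article doesn't show the configuration file itself. Based on the psoAttributes properties used in the code above, an app.config along these lines seems plausible; the key names and the two epoch limits are my assumptions, while W, C1 and C2 use the values quoted earlier:

XML
<!-- Key names and epoch limits are assumptions, not the article's actual file. -->
<configuration>
  <appSettings>
    <!-- velocity update constants -->
    <add key="W" value="0.7" />
    <add key="C1" value="1.4" />
    <add key="C2" value="1.4" />
    <!-- termination limits -->
    <add key="MaxEpochs" value="1000" />
    <add key="MaxStaticEpochs" value="100" />
  </appSettings>
</configuration>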

Example Application

The example Console application attempts to optimize four of the commonly used test functions.

  1. Beale's Function. The solution is at xd[0]=3, xd[1]=0.5
  2. Rosenbrock Function. The solution is at xd[0]=1, xd[1]=1
  3. Sphere Function. The solution is at xd[0]=0, xd[1]=0, xd[2]=0, xd[3]=0
  4. Griewank Function. The solution is at xd[0]=0, xd[1]=0, xd[2]=0, xd[3]=0
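
For reference, these could be written as error functions in the same Func<double[], double> form (Rosenbrock was shown earlier). The sketches below are my own renderings of the standard definitions, assuming using System and System.Linq, and are not necessarily the demo's exact code:

C#
// Beale's function (2 dimensions): minimum 0 at (3, 0.5).
Func<double[], double> beale = xd =>
    Math.Pow(1.5 - xd[0] + xd[0] * xd[1], 2)
    + Math.Pow(2.25 - xd[0] + xd[0] * xd[1] * xd[1], 2)
    + Math.Pow(2.625 - xd[0] + xd[0] * xd[1] * xd[1] * xd[1], 2);

// Sphere function (any dimension): minimum 0 at the origin.
Func<double[], double> sphere = xd => xd.Sum(x => x * x);

// Griewank function (any dimension): minimum 0 at the origin.
Func<double[], double> griewank = xd =>
    1.0 + xd.Sum(x => x * x) / 4000.0
    - Enumerable.Range(0, xd.Length)
                .Aggregate(1.0, (p, i) => p * Math.Cos(xd[i] / Math.Sqrt(i + 1)));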

Conclusion

Particle Swarm Optimizers are a useful addition to the growing collection of algorithms that employ a form of artificial intelligence to solve problems. They are particularly good at finding solutions to problems that have multiple, continuously variable parameters, but they can also be adapted to problems with discrete values, like the Travelling Salesman Problem, as I hope to demonstrate in a future article.

References

  • J. Kennedy and R. Eberhart, "Particle Swarm Optimization", Proceedings of the IEEE International Conference on Neural Networks, 1995.

History

  • 7th September, 2016: Initial version

License

This article, along with any associated source code and files, is licensed under The Code Project Open License (CPOL)



Comments and Discussions

 
Question: How to add an equality constraint to the PSO algorithm?
David HanCheng, 16-Jul-18 21:19

Suggestion: Possible parallel execution
Anton Herzog, 13-Sep-16 17:58

Re: Possible parallel execution
George Swan, 13-Sep-16 20:21
I can't see why you should not be able to use multiple swarms running concurrently on different threads, Anton. I have tried running multiple informer groups asynchronously within the same swarm, but did not get a significant performance gain. It is quite easy to implement using the async/await keywords, something like:

C#
public async Task RunAsync()
{
    //.....
    while (epochs < maxEpochs)
    {
        var tasks = new List<Task<IRoute>>();
        foreach (var groupOptimizer in groupOptimizers)
        {
            // start optimizing each group asynchronously
            Task<IRoute> task = groupOptimizer.OptimizeGroupAsync(psoAttributes);
            tasks.Add(task);
        }
        // wait for all the tasks to finish
        // (a task finishes after staticEpochs > maxStaticEpochs)
        IRoute[] routes = await Task.WhenAll(tasks);
        // get the best result
        foreach (IRoute route in routes)
        {
            if (route.TourDistance < bestDistance)
            {
                bestDistance = route.TourDistance;
                Console.Write(bestDistance + ",");
            }
        }
        epochs++;
        // rebuild the informer groups
        groups = swarmManager.BuildInformerGroups(swarm, psoAttributes.MaxInformers, true);
        for (int i = 0; i < groupOptimizers.Count; i++)
        {
            groupOptimizers[i].SetGroupTspParticle(groups[i]);
        }
    }
    //....
}

Best wishes, George.

