
Solving Differential Equations - Numerical Integration and stability

11 Apr 2023 · CPOL · 12 min read
Practical implementations of the integrator expanded with more examples
Simulations of Pendulum equation, Double pendulum equation, Duffing's equation, van der Pol's equations, Lorenz equations, Wave equation with different boundary conditions

Image 1

Introduction

This article was written while I was going back and forth on what to write about. I would first decide that a certain element would only be mentioned and not implemented in the code, only to change my mind, implement it anyway, and have to rewrite the text. Numerical implementation is an enormous subject, and trying to cover, or even just mention, all the major elements you should think about is a major undertaking. I realize that I will have to write more articles to cover more subjects, but for now, onward to the article at hand.

How accurate must a result be in order to be useful to us? Ever since the dawn of time, improved instrument accuracy has been a driving force for better utilization of human resources. But once we push the boundaries of what is visible or easily checked in the real world, can we do something to assure ourselves that the result we get is close to the expected outcome?

Oh, how the numbers dance and play
In our quest to find the way
To solve the differential equations
That plague us with their vexations.

We seek the accuracy divine
To tame the chaos and align
The paths of all our wandering curves
And find the solutions we deserve

But oh, the pitfalls we must face
As we strive to keep the pace
Of numerical methods galore
And their errors we cannot ignore

For every step we take with care
May lead us to a pitfall there
Where the solution we thought was sound
Suddenly crumbles to the ground

Yet we persist with hearts afire
To quench our thirst for truth's desire
And with each failure, we rise again
To chart the course with might and main

So let us heed the lessons learned
And strive for accuracy that's earned
With every step we take in stride
And in the end, we'll surely bide

For though the numbers may confound
And errors oft be found around
We'll find the truth, and it will ring
Through time and space, and everything.

Simulations and discussions of mathematics have been going on for centuries, and they all have roughly the same thing to say. Arnold Sommerfeld said that we can only hope to understand the average behavior of the world, since we generally cannot simulate the erratic behavior of every particle. Others have marveled that mathematics can be used to describe nature at all. If the result is different every time you do something, we cannot learn anything, so we cheat. Like Newton did, we make the best of the situation and find a curve close enough to the situation at hand. Once the curve is fitted, we do our work and try to extract some repeatable behavior that we can understand.

In this ongoing series about solving differential equations, I have so far built a class used to do explicit Runge-Kutta style integration.

The Integrator

Explicit methods like Runge-Kutta all have, in general, some region where they give accurate results. If the time step is above some threshold, they no longer work as expected, and in general higher order methods give you the opportunity to increase the step size without losing accuracy. There is a method that is stable for all time steps, namely the backward Euler method. This is an implicit method, meaning that the value you want to find appears on both sides of the equation. To solve such equations, you generally need Newton-Raphson or other fixed point iteration techniques to get to the result. I will not go through these implicit methods in this article, but I think it's important to know that they exist, since so-called stiff ODEs usually cannot be solved accurately with explicit methods alone. There also exist Butcher tableaus that combine results from implicit and explicit methods, for instance the Crank-Nicolson method.
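To see the stability difference concretely, consider the linear test equation y' = λy with λ < 0. Forward Euler gives y_{n+1} = (1 + hλ)y_n, which diverges once |1 + hλ| > 1, while backward Euler gives y_{n+1} = y_n / (1 − hλ), which decays for any step size. A minimal double-precision sketch, not part of the article's integrator class:

```csharp
using System;

class BackwardEulerDemo
{
    static void Main()
    {
        double lambda = -50.0; // stiff decay rate
        double h = 0.1;        // far above the explicit stability limit 2/|lambda| = 0.04
        double yExplicit = 1.0, yImplicit = 1.0;

        for (int n = 0; n < 20; n++)
        {
            // Forward (explicit) Euler: y_{n+1} = y_n + h*lambda*y_n
            yExplicit = yExplicit + h * lambda * yExplicit;

            // Backward (implicit) Euler: y_{n+1} = y_n + h*lambda*y_{n+1}.
            // For this linear test equation it can be solved directly;
            // a nonlinear f would need a Newton-Raphson solve here.
            yImplicit = yImplicit / (1.0 - h * lambda);
        }

        Console.WriteLine($"forward Euler:  {yExplicit}");  // blows up
        Console.WriteLine($"backward Euler: {yImplicit}");  // decays toward zero
    }
}
```

The exact solution decays to essentially zero over this interval; only the implicit step reproduces that when the step size is this large.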

Standard high order explicit integrators of the Runge-Kutta style have long been known to have low-storage versions. This is generally preferable since integration is usually performed on large grids, so lower storage usually also gives faster methods. Over the years, many different low-storage explicit RK schemes of varying order have been published, and a rather large collection of them is given in this article along with some general advice regarding implementation and construction of methods. Some of the low-storage methods have been added to the integrator, but the code only weakly exploits this fact right now; there are more efficient strategies.
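As an illustration of the low-storage idea, here is a sketch of a 2N-storage scheme in the style of Williamson: each stage overwrites a single auxiliary register dS instead of keeping every stage derivative. The coefficients below are the classic three-stage, third-order Williamson RK3 values; the class and method names are illustrative, not the article's integrator API:

```csharp
using System;

class LowStorageRK3
{
    // Williamson's classic 2N-storage RK3 coefficients
    static readonly double[] A = { 0.0, -5.0 / 9.0, -153.0 / 128.0 };
    static readonly double[] B = { 1.0 / 3.0, 15.0 / 16.0, 8.0 / 15.0 };
    static readonly double[] C = { 0.0, 1.0 / 3.0, 3.0 / 4.0 }; // stage times

    // One step of y' = f(t, y) using only y and one extra register dS
    public static double Step(Func<double, double, double> f, double t, double y, double h)
    {
        double dS = 0.0;
        for (int i = 0; i < 3; i++)
        {
            dS = A[i] * dS + h * f(t + C[i] * h, y); // overwrite the register
            y += B[i] * dS;                          // accumulate into the state
        }
        return y;
    }

    static void Main()
    {
        // Integrate y' = -y from y(0) = 1 to t = 1; the exact answer is e^-1
        double y = 1.0, t = 0.0, h = 0.01;
        for (int n = 0; n < 100; n++) { y = Step((tt, yy) => -yy, t, y, h); t += h; }
        Console.WriteLine(y); // close to 0.3678794 (third-order accurate)
    }
}
```

For a scalar equation the saving is invisible, but on a large grid the state and the single register are the only two arrays of grid size that need to be kept.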

Also, I needed more accuracy than the double type could offer, so I ended up developing integration calls for both double and decimal precision.

Speaking of accuracy, there are other integrators that are used, for instance, on Hamiltonian systems, namely the symplectic integrators. These are typically recommended for long-running integrations of the double pendulum equation and the like. Note also that symplectic integrators can be of the Runge-Kutta type. The main idea is to exploit the structure of the system, split the integration into parts, and use an integrator for each part that minimizes errors.
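As a minimal sketch of the symplectic idea, here is the semi-implicit (symplectic) Euler method applied to the simple pendulum: the angular velocity is updated first, and the new velocity is then used to update the angle. Unlike ordinary explicit Euler, this keeps the energy bounded over long runs. The variable names are illustrative, not the article's code:

```csharp
using System;

class SymplecticPendulum
{
    static void Main()
    {
        double g = 9.81, L = 1.0;
        double theta = 1.0, omega = 0.0; // angle and angular velocity
        double h = 0.001;

        // Energy per unit mass: kinetic + potential
        double Energy() => 0.5 * L * L * omega * omega - g * L * Math.Cos(theta);
        double e0 = Energy();

        for (int n = 0; n < 100000; n++)
        {
            // Symplectic Euler: update the velocity with the OLD angle,
            // then the angle with the NEW velocity.
            omega += h * (-g / L * Math.Sin(theta));
            theta += h * omega;
        }

        // The energy error oscillates at O(h) instead of growing without bound
        Console.WriteLine($"relative energy drift: {(Energy() - e0) / Math.Abs(e0)}");
    }
}
```

Swapping the two update lines (updating the angle with the old velocity) gives plain explicit Euler, whose energy grows steadily over the same run.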

General Setup

The general formulation of a differential equation to be used by this integrator is as a system of first derivatives. The system can be written as:

$ \begin{equation} \begin{split} y_1' &= y_2 \\ y_2' &= y_3 \\ y_3' &= y_4 \\ \vdots \\ y_n' &= f(t,y_1, \cdots, y_{n}) \end{split} \end{equation}$

Illustrating the procedure with the second order differential equation of the pendulum:

$ m \cdot L \cdot y'' + m \cdot g \cdot \sin(y) = 0 $

We transform this equation into a system of first derivatives:

$ \begin{split} y_1' &= y_2 \\ y_2' &= - \frac{g}{L} \sin(y_1) \end{split} $
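Written against the same ydot signature used by the other examples in this article, the pendulum system could look like the sketch below; the Gravity and Length fields are assumed here for illustration:

```csharp
using System;

class Pendulum
{
    public decimal Gravity = 9.81M; // g, assumed field
    public decimal Length = 1.0M;   // L, assumed field

    /// <summary>
    /// Pendulum derivatives: y1' = y2, y2' = -(g/L)*sin(y1)
    /// </summary>
    public decimal[] ydot(decimal[] y, decimal t)
    {
        return new decimal[2]
        {
            y[1],
            -(Gravity / Length) * (decimal)Math.Sin((double)y[0])
        };
    }
}
```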

Let me show you another second order differential equation set up in this form as well. Take van der Pol's equation, which looks similar to the pendulum equation, except that its damping term is nonlinear. Equations of this type can behave in quite surprising ways.

$ y'' + \mu (y^2 - 1) \cdot y' + y = 0 $

Implementation of this is pretty straightforward:

C#
/// <summary>
/// The derivatives
/// </summary>
/// <param name="t">Time dependent variable in f'(t,x)</param>
/// <param name="y">Space dependent variable in f'(t,x)</param>
/// <returns>f'(t,x)</returns>
public decimal[] ydot(decimal[] y, decimal t)
{
    decimal[] z = new decimal[2]
    {
        y[1],
        -Mu*(y[0]*y[0] - 1 )*y[1] - y[0]
    };
    return z;
}

Result of simulation:

Image 2

Another second degree differential equation to implement is Duffing's equation, or the Duffing oscillator, which is also a nonlinear second order differential equation. Depending on the input parameters, it too can exhibit chaotic behavior.

$ y'' + \delta \cdot y' + \alpha \cdot y + \beta \cdot y^3 - \gamma \cos(\omega \cdot t) = 0 $

In short, the variables describe a mass hanging from a nonlinear spring:

  • Damping - \( \delta \)
  • Stiffness - \( \alpha \)
  • Non-linearity - \( \beta \)
  • Amplitude - \( \gamma \)
  • Angular force frequency - \( \omega \)

The system of equations:

$ \begin{split} y_1' &= y_2 \\ y_2' &= \gamma \cos(\omega \cdot t) - \delta \cdot y_2 - \alpha \cdot y_1 - \beta \cdot y_1^3 \end{split} $

Implementing this differential equation is straightforward:

C#
/// <summary>
/// The derivatives
/// </summary>
/// <param name="t">Time dependent variable in f'(t,x)</param>
/// <param name="y">Space dependent variable in f'(t,x)</param>
/// <returns>f'(t,x)</returns>
public decimal[] ydot(decimal[] y, decimal t)
{
    decimal[] z = new decimal[2]
    {
        y[1],
        Amplitude*(decimal)Math.Cos((double)Omega *
           (double)t) - Damping*y[1] - Alpha*y[0] - Beta*y[0]*y[0]*y[0]
    };
    return z;
}

The result of one simulation:

Image 3

With unstable and nonlinear equations, a bifurcation map might reveal a pattern in what otherwise seems like chaotic behavior. You can clearly see that attractors appear, meaning points that the solution likes to stay near.

Wave Equation - Boundary Conditions

We return to our prodigal son, so to speak. The wave equation is regarded as the first time Newton's laws were actually used to form a differential equation. It was in Taylor's paper of 1701 that this was first published. He derived the solution for the fundamental frequency of a string based on Newton's laws.

In the previous article, I just took the wave equation and made a central finite difference matrix in the x direction.

$ D_{xx} = \begin{bmatrix} 0 & 0 & \cdots & 0 & 0 \\ 1 & -2 & 1 & \cdots & 0 \\ \vdots & \ddots & \ddots & \ddots & \vdots \\ \vdots & \ddots & 1 & -2 & 1 \\ 0 & 0 & \cdots & 0 & 0 \\ \end{bmatrix} $

The matrix implements a hard boundary condition and will preserve all the energy that was originally put into the system. Finite difference schemes can always be constructed from lower order forward and backward finite difference operators:

$ D_{xx} = D_{+x} + D_{-x} $
$ D_{-x} = \begin{bmatrix} 0 & 0 & \cdots & 0 & 0 \\ 1 & -1 & 0 & \cdots & 0 \\ \vdots & \ddots & \ddots & \ddots & \vdots \\ \vdots & \ddots & 1 & -1 & 0 \\ 0 & 0 & \cdots & 0 & 0 \\ \end{bmatrix} $
$ D_{+x} = \begin{bmatrix} 0 & 0 & \cdots & 0 & 0 \\ 0 & -1 & 1 & \cdots & 0 \\ \vdots & \ddots & \ddots & \ddots & \vdots \\ \vdots & \ddots & 0 & -1 & 1 \\ 0 & 0 & \cdots & 0 & 0 \\ \end{bmatrix} $
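The decomposition can be verified numerically: build the forward and backward first-difference matrices with the sign conventions shown above, add them, and the interior rows reproduce the central 1, -2, 1 stencil. A small double-precision sketch, not the article's decimal matrix class:

```csharp
using System;

class DifferenceMatrices
{
    static void Main()
    {
        int n = 6;
        double[,] dPlus = new double[n, n];
        double[,] dMinus = new double[n, n];

        // Interior rows only; first and last rows stay zero (hard boundary)
        for (int i = 1; i < n - 1; i++)
        {
            dMinus[i, i - 1] = 1; dMinus[i, i] = -1;  // y[i-1] - y[i]
            dPlus[i, i] = -1; dPlus[i, i + 1] = 1;    // y[i+1] - y[i]
        }

        // D_xx = D_{+x} + D_{-x} gives the central stencil 1, -2, 1
        double[,] dxx = new double[n, n];
        for (int i = 0; i < n; i++)
            for (int j = 0; j < n; j++)
                dxx[i, j] = dPlus[i, j] + dMinus[i, j];

        Console.WriteLine($"{dxx[2, 1]} {dxx[2, 2]} {dxx[2, 3]}"); // 1 -2 1
    }
}
```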

One can see the effect in some of Walter Lewin's lectures.

Sitting in a bathtub, you will definitely not see the wave going beneath the bathtub; instead, it will climb the edges of the bathtub and descend again. A small correction of the difference matrix yields this behavior, which implements Neumann boundary conditions.

$ D_{xx} = \begin{bmatrix} -3/2 & 4/2 & -1/2 & \cdots & 0 \\ 1 & -2 & 1 & \cdots & 0 \\ \vdots & \ddots & \ddots & \ddots & \vdots \\ \vdots & \ddots & 1 & -2 & 1 \\ 0 & \cdots & -1/2 & 4/2 & -3/2 \\ \end{bmatrix} $

Another boundary condition that is important to be able to implement is the open end boundary condition. If you remember, the factorization of the wave equation gave two separate differential equations. Now we exploit this in order to implement an open boundary condition.

In the second order finite difference approximation, I used forward and backward differences to generate the operator. Now I want to generate an operator that simulates this, meaning that the first derivative at the boundary is always zero. We can, in fact, generate a difference scheme for the first derivative as well as for the second derivative:

$ D_{x} = \begin{bmatrix} -3/2 & 4/2 & -1/2 & \cdots & 0 & 0 \\ 0 & 1 & 0 & \cdots & 0 & 0 \\0 & 0 & 1 & \cdots & 0 & 0 \\ \vdots & & & \ddots & & \vdots \\ 0 & 0 & \cdots & 1 & 0 & 0 \\ 0 & 0 & \cdots & 0 & 1 & 0 \\ 0 & \cdots & 0 & -1/2 & 4/2 & -3/2 \\\end{bmatrix} $

This matrix must be multiplied by the speed of sound to generate the correct boundary condition. Remember that the second order differential matrix is already multiplied by the speed of sound squared.

C#
/// <summary>
/// Derivative function
/// </summary>
/// <param name="y"></param>
/// <param name="t"></param>
/// <returns></returns>
public decimal[,] ydot(decimal[,] y, decimal t)
{
    decimal[,] z = new decimal[y.GetLength(0), y.GetLength(1)];
    decimal[] y_t = new decimal[y.GetLength(1)];

    // Extract first and second order initial conditions
    Parallel.For (0, y.GetLength(1), i=> {

         z[0, i] = y[1, i];
         y_t[i] = y[0, i];
    });

    // Multiply the 2nd order differential matrix with U_0
    decimal[] y_new = DifferentialMatrix * y_t;

    Parallel.For(0, y.GetLength(1), i =>
    {
        z[1, i] = y_new[i];
    });

    // Generate a simplified first order derivative matrix
    decimal[] dx = new decimal[]
    {-49M / 20M, 6M, -15M / 2M, 20M / 3M, -15M / 4M, 6M / 5M, -1M / 6M };

    int end = y.GetLength(1) - 1;

    // Accumulate the boundary terms sequentially: every iteration adds
    // into the same two elements, so a Parallel.For here would race.
    for (int i = 0; i < dx.GetLength(0); i++)
    {
        // Left side open ended
        z[0, 0] += SpeedOfSound * y_t[i] * dx[i];
        // Right side open ended
        z[0, end] += SpeedOfSound * y_t[end - i] * dx[i];
    }

    return z;
}

With these conditions, we can also introduce a reflection coefficient at the boundary to simulate a transition from one medium to another. We can also set it to zero and get a fixed end condition again.

C#
public decimal Left_Reflection = 1;
public decimal Right_Reflection = 1;

/// <summary>
/// Derivative function
/// </summary>
/// <param name="y"></param>
/// <param name="t"></param>
/// <returns></returns>
public decimal[,] ydot(decimal[,] y, decimal t)
{
    decimal[,] z = new decimal[y.GetLength(0), y.GetLength(1)];
    decimal[] y_t = new decimal[y.GetLength(1)];

    // Extract first and second order initial conditions
    Parallel.For (0, y.GetLength(1), i=> {

         z[0, i] = y[1, i];
         y_t[i] = y[0, i];
    });

    // Multiply the 2nd order differential matrix with U_0
    decimal[] y_new = DifferentialMatrix * y_t;

    Parallel.For(0, y.GetLength(1), i =>
    {
        z[1, i] = y_new[i];
    });

    // Generate a simplified first order derivative matrix
    // and multiply it with U_t
    decimal[] dx = new decimal[] {-49M / 20M, 6M, -15M / 2M,
                   20M / 3M, -15M / 4M, 6M / 5M, -1M / 6M };

    int end = y.GetLength(1) - 1;

    // Accumulate the boundary terms sequentially: every iteration adds
    // into the same two elements, so a Parallel.For here would race.
    for (int i = 0; i < dx.GetLength(0); i++)
    {
        // Left side open ended
        z[0, 0] += Left_Reflection * SpeedOfSound * y_t[i] * dx[i];
        // Right side open ended
        z[0, end] += Right_Reflection * SpeedOfSound * y_t[end - i] * dx[i];
    }

    return z;
}

In order to get a completely accurate differential matrix, I should divide the first order difference matrix by the grid spacing (dx) and the second order difference matrix by the grid spacing squared (dx^2). This can be implemented by simply setting SpeedOfSound /= dx before any of the matrices are generated.

Image 4

The stability of the solver is quite complicated for the wave equation, since it involves analyzing the stability of the difference matrices as well as of the integrator. Also, if you read the last article, I used double variables in my integrator to solve the problems posted there. As it turns out, solving the wave equation with all the steps I want to include requires more precision, so all variables have been changed from double to decimal precision. This has hampered the speed but improved accuracy.

Stability also depends on the explicit integrator itself, which requires the step size in time to be small enough. With explicit methods, a too large step size will not give the correct result, since the integrator cannot handle it. As mentioned earlier, the backward Euler method is unconditionally stable for any step size, but it is more complicated to implement.

The difference operators can also be analyzed with regard to stability by looking at the eigenvalues of the matrix. Since they are applied repeatedly, they have the capacity to blow up if these eigenvalues are not bounded.
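For the interior tridiagonal 1, -2, 1 operator of size n (before scaling), the eigenvalues are actually known in closed form, λ_k = −4 sin²(kπ / (2(n+1))), so they are bounded in the interval (−4, 0); after scaling by c²/Δx², they indicate how small the time step of the explicit integrator must be. A quick sanity check of that bound:

```csharp
using System;

class DxxEigenvalues
{
    static void Main()
    {
        int n = 50; // number of interior grid points
        for (int k = 1; k <= n; k++)
        {
            // Closed-form eigenvalues of the n x n tridiagonal (1, -2, 1) matrix
            double lambda = -4.0 * Math.Pow(Math.Sin(k * Math.PI / (2.0 * (n + 1))), 2);
            if (lambda <= -4.0 || lambda >= 0.0)
                Console.WriteLine($"eigenvalue {k} out of bounds: {lambda}");
        }
        Console.WriteLine("all eigenvalues lie in (-4, 0)");
    }
}
```

Since all eigenvalues are negative and bounded, repeated application of the operator does not blow up on its own; it is the combination with the time stepper that sets the stability limit.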

Lorenz Equations

The Navier-Stokes equations have always been assumed to be correct, since we can derive them from relatively simple physical laws. With a little reformulation, you can get the wave equation, and with further simplifications, you can get the Lorenz equations.

The theory behind these equations is quite fascinating, as it involves some computer history. When the first computers came into use in natural science around 1960 at MIT, Lorenz, who was a mathematician and a meteorologist, was simplifying the Navier-Stokes equations in order to perform simulations of particle movement in the upper atmosphere. He simplified the equations to make them fit on a computer. The result was a mess; it had the particle running everywhere. Worse, the results were stored in a different format than the one he was simulating with, so when he restarted the calculations, he got completely different results. This led Lorenz to say that a butterfly could alter the weather and cause a tornado to appear.

The equations are rather simple:
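Written out in the same notation as the code below, with σ the Prandtl number, ρ the Rayleigh number, and β the system parameter, the system is:

$ \begin{split} y_1' &= \sigma (y_2 - y_1) \\ y_2' &= y_1 (\rho - y_3) - y_2 \\ y_3' &= y_1 y_2 - \beta y_3 \end{split} $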

C#
/// <summary>
/// The derivatives
/// </summary>
/// <param name="t">Time dependent variable in f'(t,x)</param>
/// <param name="y">Space dependent variable in f'(t,x)</param>
/// <returns>f'(t,x)</returns>
public decimal[] ydot(decimal[] y, decimal t)
{
    // Prandtl number
    decimal s = 10;
    // Rayleigh number
    decimal r = 28;
    // System parameter
    decimal b = 8M / 3M;

    decimal[] z = new decimal[3]
    {
        -s*y[0] + s*y[1],
        -y[0] *y[2] + r*y[0]-y[1],
        y[0]*y[1]- b*y[2]
    };
    return z;
}

And here, I plot the results in a plane and simultaneously move a ball in 3D space, so you can see that the movement is very erratic even though the plot eventually looks quite predictable. I used a fellow CodeProject creator's work to generate the 3D ball.

Image 5

Double Pendulum

These equations are derived using Lagrangian (or Hamiltonian) mechanics, which is based on the potential and kinetic energy of the system. Equations of this kind become very tedious to set up; just look at the derivation here, and the final implementation in code:

C#
/// <summary>
/// The derivatives
/// </summary>
/// <param name="t">Time dependent variable in f'(t,x)</param>
/// <param name="y">Space dependent variable in f'(t,x)</param>
/// <returns>f'(t,x)</returns>
public decimal[] ydot(decimal[] y, decimal t)
{
    decimal a = -g * (2 * M_1 + M_2) * (decimal)Math.Sin((double)y[0]);
    decimal b = -g * M_2 * (decimal)Math.Sin((double)(y[0] - 2 * y[2]));
    decimal c = -2M * (decimal)Math.Sin((double)(y[0] - y[2])) *
                 M_2 * (y[3] * y[3] * L_2 - y[1] * y[1] * L_1 *
                 (decimal)Math.Cos((double)(y[0] - y[2])));
    decimal d = (2 * M_1 + M_2 - M_2 * (decimal)Math.Cos((double)
                (2 * y[0] - 2 * y[2])));

    decimal a1 = 2 * (decimal)Math.Sin((double)(y[0] - y[2]));
    decimal b1 = y[1] * y[1] * L_1 * (M_1 + M_2);
    decimal b2 = g * (M_1 + M_2) * (decimal)Math.Cos((double)y[0]);
    decimal b3 = y[3] * y[3] * L_2 * M_2 *
                 (decimal)Math.Cos((double)(y[0] - y[2]));

    decimal[] z = new decimal[4]
    {
        y[1],
        (a + b + c)/ (L_1*d) - Damping*y[1] ,
        y[3],
        (a1*( b1 + b2 + b3))/(L_2*d)- Damping *y[3]
    };

    return z;
}

This is also prone to erratic and chaotic behavior.

Image 6

History

  • 11th March, 2023: First posted

References

  • "Nonlinear Dynamics and Chaos" by Steven Strogatz
  • "A First Course in the Numerical Analysis of Differential Equations" by Arieh Iserles
  • "Numerical Analysis" by Timothy Sauer
  • "Numerical Methods for Ordinary Differential Equations" by J. C. Butcher

License

This article, along with any associated source code and files, is licensed under The Code Project Open License (CPOL)