
Numerical Methods with C++ Part 3: Root Approximation Algorithms

10 Jun 2019 · CPOL · 8 min read
Implementation of Root Approximation Algorithms such as Bisection, Newton, Secant, Dekker and Brent

Welcome back to a new post on thoughts-on-cpp.com. This time, I would like to take a closer look at root approximation methods, which I regularly use to solve numerous numerical problems. We start with a simple approach, Bisection, continue with the Newton and Secant methods, and conclude with the black-box methods of Dekker and Brent.

Preamble

In the following chapters, we take a closer look at several algorithms used for root approximation of functions. We compare all algorithms against the nonlinear function f(x)=x^3-x^2-x-1 , which has only one root inside the defined range [a,b] (this criterion is quite important, because otherwise it can be hard to tell which root the algorithm returned). An additional criterion is f(a) \cdot f(b) < 0 . Please keep in mind that the examples don't limit the number of iterations; in production code, such a limit is advisable because we usually don't want to risk infinite loops. As usual, you can find all the sources at GitHub.
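To make the setup concrete, here is a small sketch of the test setup (a hedged example: only f(x) and the interval [0,2] are taken from the text, the rest is plain standard C++):

#include <cstdio>

// Test function from the preamble: f(x) = x^3 - x^2 - x - 1,
// with exactly one root inside [0, 2].
double f(double x) { return x * x * x - x * x - x - 1.0; }

int main() {
    const double a = 0.0, b = 2.0;
    // The additional criterion: a sign change guarantees a bracketed root.
    std::printf("f(a) * f(b) = %g (must be < 0)\n", f(a) * f(b));
}

The single root inside [0,2] lies at x \approx 1.8392867552, the value all algorithms below should converge to.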

Bisection Method

Let's start with Bisection, a method best known from searching values in sorted arrays of any size, which can also be used for root approximation. The benefits of the Bisection method are its simple implementation and its guaranteed convergence (if there is a solution, Bisection will find it). The algorithm is rather simple:

  1. Calculate the midpoint m=\frac{a+b}{2}  of the given interval [a,b]
  2. Calculate the function value at the midpoint, f(m)
  3. If the new half-interval width m-a or |f(m)| fulfills the convergence criterion, m is the solution
  4. Compare the sign of f(m) with the signs of f(a) and f(b), and replace either a or b by m so that the resulting interval still includes the sought root

Bisection algorithm

Implementation of Bisection Algorithm

As we can see from the source code, the actual implementation of Bisection is so simple that most of the code is just there to produce the output. The important part happens inside the while loop, which executes as long as the result of f(x) does not fulfill the criterion |f(x)| < \epsilon . The new position x is calculated in the method calculateX(), which checks the sign of f(x) and, depending on the result, assigns x to a or b to define the new range that includes the root. It is important to note that the Bisection algorithm always needs f(a) and f(b) with different signs, which is assured by the method checkAndFixAlgorithmCriteria(). The algorithm is rather slow, needing 36 iterations to reach the convergence criterion of \epsilon = 1\cdot10^{-10} , but given enough time, it will converge no matter what.
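A condensed sketch of the described loop (the original splits the work into calculateX() and checkAndFixAlgorithmCriteria(); here they are reduced to the loop body and an assertion, and the exact scaffolding is an assumption):

#include <cassert>
#include <cmath>
#include <cstdio>

double f(double x) { return x * x * x - x * x - x - 1.0; }

double bisection(double a, double b, double eps = 1e-10) {
    double fa = f(a);
    assert(fa * f(b) < 0.0);          // the interval must bracket a root
    double m = 0.5 * (a + b);         // step 1: midpoint
    double fm = f(m);                 // step 2: function value at m
    int i = 0;
    while (std::fabs(fm) >= eps) {    // step 3: convergence check
        // step 4: keep the half interval that still brackets the root
        if (std::signbit(fm) == std::signbit(fa)) { a = m; fa = fm; }
        else { b = m; }
        m = 0.5 * (a + b);
        fm = f(m);
        ++i;
    }
    std::printf("Bisection: %d iterations\n", i);
    return m;
}

int main() { std::printf("root = %.10f\n", bisection(0.0, 2.0)); }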

Results of Bisection Algorithm

Newton Method

As long as we also have the derivative of the function and the function is smooth, we can use the much faster Newton method, where the next, closer approximation x_{n+1}  of the root can be calculated by:

x_{n+1}=x_n-\frac{f(x_n)}{f'(x_n)}

Newton algorithm

Implementation of Newton Algorithm

This algorithm, too, iterates as long as the result of f(x) does not fulfill the criterion |f(x)| < \epsilon . The method calculateX() not only applies the formula of the Newton method, it also checks whether the derivative of f(x) becomes zero. Because floating point numbers can't be compared directly, we compare f'(x) against the smallest positive floating point value, which we get by calling std::numeric_limits<double>::min(). In such a case, where f'(x) is zero, the algorithm can't converge (stationary point), as shown in the image below.

Newton algorithm with stationary condition
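A condensed sketch of the described iteration (the derivative guard via std::numeric_limits<double>::min() follows the text; the rest of the scaffolding is an assumption):

#include <cmath>
#include <cstdio>
#include <limits>

double f(double x)  { return x * x * x - x * x - x - 1.0; }
double df(double x) { return 3.0 * x * x - 2.0 * x - 1.0; }

double newton(double x, double eps = 1e-10) {
    double fx = f(x);
    while (std::fabs(fx) >= eps) {
        const double dfx = df(x);
        // derivative (numerically) zero -> stationary point, no convergence
        if (std::fabs(dfx) < std::numeric_limits<double>::min()) {
            std::fprintf(stderr, "stationary point at x = %f\n", x);
            return x;
        }
        x -= fx / dfx;   // x_{n+1} = x_n - f(x_n)/f'(x_n)
        fx = f(x);
    }
    return x;
}

int main() { std::printf("root = %.10f\n", newton(1.5)); }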

The advantage of the Newton method is its raw speed. In our case, it needs just 6 iterations to reach the convergence criterion of \epsilon = 1\cdot10^{-10} . But the algorithm also has several drawbacks, as implied above:

  • Only suitable if we know the derivative of the given function
  • Only suitable for smooth and continuously differentiable functions
  • If the starting point x_0  is chosen badly, or a calculated point x_{n+1}  lies at or close to a local maximum or minimum, the derivative f'(x) becomes 0 and we run into a stationary point condition.
  • If the derivative of the function is not well behaved, the Newton method tends to overshoot; x_{n+1}  might then be too far from the root to be useful to the algorithm

Results of Newton Algorithm

Secant Method

In many cases, we don't have the derivative of a function f(x), or it is too complex to compute. In such cases, we can use the Secant method. This method does essentially the same as the Newton method, but calculates the necessary slope not via the derivative f'(x) but via the difference quotient of the last two points calculated with f(x):

x_{n+1}=x_n-f(x_n)\frac{x_n-x_{n-1}}{f(x_n)-f(x_{n-1})}

Secant method

Implementation of Secant Algorithm

This algorithm needs a range [a,b] that should include the root we are searching for (not absolutely necessary, but it helps). It uses the method calculateX(), which not only calculates x_{n+1}  but also checks whether the denominator f(x_n)-f(x_{n-1})  is 0. This algorithm is also very fast: it needs only 7 iterations in a well-chosen range and 10 iterations in a broader range, both calculations fulfilling the same convergence criterion of \epsilon = 1\cdot10^{-10} .
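A condensed sketch of the described iteration, with the denominator check from calculateX() inlined into the loop (the scaffolding is an assumption):

#include <cmath>
#include <cstdio>
#include <limits>

double f(double x) { return x * x * x - x * x - x - 1.0; }

double secant(double x0, double x1, double eps = 1e-10) {
    double f0 = f(x0), f1 = f(x1);
    while (std::fabs(f1) >= eps) {
        const double denom = f1 - f0;
        // guard against a vanishing denominator f(x_n) - f(x_{n-1})
        if (std::fabs(denom) < std::numeric_limits<double>::min()) break;
        const double x2 = x1 - f1 * (x1 - x0) / denom;
        x0 = x1; f0 = f1;   // shift the two supporting points
        x1 = x2; f1 = f(x1);
    }
    return x1;
}

int main() { std::printf("root = %.10f\n", secant(1.5, 2.0)); }

Like the Newton method, this algorithm has drawbacks which we have to be aware of: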

  • Only suitable for smooth and continuously differentiable functions
  • If one of the calculated points x_{n+1}  lies at or close to a local maximum or minimum, the slope approaches 0 and we run into a stationary point condition.
  • If the calculated slope quotient is very shallow, the Secant method also tends to overshoot; x_{n+1}  might then be too far from the root to be useful to the algorithm.
  • If the range [a,b] is chosen badly, e.g., including a local minimum and maximum, the algorithm tends to oscillate. In general, minima and maxima, local or global, can be a problem for the Secant method

Results of Secant Algorithm

Dekker Method

The idea of Dekker's method is to combine the speed of the Newton/Secant method with the guaranteed convergence of Bisection. The algorithm is defined as follows:

s=\begin{cases} b_k-f(b_k)\frac{b_k-b_{k-1}}{f(b_k)-f(b_{k-1})}, & \text{if } f(b_k) \neq f(b_{k-1}) \\ m, & \text{otherwise} \end{cases}

m=\frac{a_k+b_k}{2}

  1. b_k  is the current iteration's guess of the root and a_k  is the current counterpoint of b_k  , such that f(a_k)  and f(b_k)  have opposite signs.
  2. The algorithm has to fulfill the criterion |f(b_k)| \leq |f(a_k)|  such that b_k  is the closer approximation of the root
  3. b_{k-1}  is the previous iterate, starting with b_{k-1}=a_k  at the beginning of the algorithm
  4. If s lies between m and b_k  , then b_{k+1}=s  , otherwise b_{k+1}=m
  5. If f(a_k)f(b_{k+1}) > 0 \wedge f(b_k)f(b_{k+1}) < 0  , then a_{k+1} = b_k  , otherwise a_{k+1} = a_k

Implementation of Dekker Algorithm

As we can see in the implementation of Dekker's method, we always calculate both values, m and s, with the methods calculateBisection() and calculateSecant(), and assign the result depending on whether the method useSecantMethod() confirms that s lies between m and b_k  (as defined in rule 4). Afterwards, we check whether the resulting function values fulfill rule number 5. Because of the Bisection part, we have to check after each iteration whether the condition f(a_k) \cdot f(b_k) < 0  still holds, and fix it if necessary, which is done with the method checkAndFixAlgorithmCriteria().
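A condensed sketch of the five rules above (a hedged example: the variable names and the inlined versions of calculateSecant(), calculateBisection(), useSecantMethod() and checkAndFixAlgorithmCriteria() are assumptions):

#include <algorithm>
#include <cassert>
#include <cmath>
#include <cstdio>
#include <utility>

double f(double x) { return x * x * x - x * x - x - 1.0; }

double dekker(double a, double b, double eps = 1e-10) {
    double fa = f(a), fb = f(b);
    assert(fa * fb < 0.0);                       // rule 1: opposite signs
    if (std::fabs(fa) < std::fabs(fb)) { std::swap(a, b); std::swap(fa, fb); } // rule 2
    double bPrev = a, fbPrev = fa;               // rule 3
    while (std::fabs(fb) >= eps) {
        const double m = 0.5 * (a + b);
        const double s = (fb != fbPrev)
            ? b - fb * (b - bPrev) / (fb - fbPrev)   // secant step
            : m;                                     // fall back to bisection
        // rule 4: accept s only if it lies between m and b_k
        const double bNext  = (std::min(m, b) < s && s < std::max(m, b)) ? s : m;
        const double fbNext = f(bNext);
        // rule 5: pick the contrapoint that keeps the root bracketed
        if (fa * fbNext > 0.0) { a = b; fa = fb; }
        bPrev = b; fbPrev = fb;
        b = bNext; fb = fbNext;
        if (std::fabs(fa) < std::fabs(fb)) { std::swap(a, b); std::swap(fa, fb); } // rule 2 again
    }
    return b;
}

int main() { std::printf("root = %.10f\n", dekker(0.0, 2.0)); }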

Results of Dekker Algorithm

The Dekker algorithm is as fast as the Newton/Secant method but also guarantees convergence. It takes 7 iterations in case of a well-chosen range and 9 iterations in case of a broader range. As we can see, the Dekker algorithm is very fast, but there are examples where |b_k-b_{k+1}|  becomes extremely small while the algorithm still chooses the secant step. In such cases, the algorithm can take even more iterations than pure Bisection would.

Brent Method

Because of Dekker's slow-convergence problem, the method was extended by Brent into what is now known as Brent's method or the Brent-Dekker method. This algorithm extends Dekker's algorithm by using four points (a, b, b_{k-1}  and b_{k-2}  ) instead of just three, by adding Inverse Quadratic Interpolation to the linear interpolation and Bisection, and by adding conditions that prevent slow convergence. The algorithm decides with the following conditions which of the methods to use: Bisection, the Secant method, or Inverse Quadratic Interpolation:

  • Bisection, if one of the following holds (B means the last iteration used Bisection):
    • B \wedge |s-b_k| \geq |b_k-b_{k-1}|/2
    • !B \wedge |s-b_k| \geq |b_{k-1}-b_{k-2}|/2
    • B \wedge |b_k-b_{k-1}| < |\delta|
    • !B \wedge |b_{k-1}-b_{k-2}| < |\delta|
  • Inverse Quadratic Interpolation
    • f(a_k) \neq f(b_{k-1}) \wedge f(b_k) \neq f(b_{k-1})
  • In all other cases use the Secant method

Implementation of Brent Algorithm
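A hedged, self-contained sketch of the method: the bisection-fallback and interpolation conditions follow the list above, while the variable names, the choice \delta = \epsilon , and the secant fallback through a and b are assumptions:

#include <algorithm>
#include <cassert>
#include <cmath>
#include <cstdio>
#include <utility>

double f(double x) { return x * x * x - x * x - x - 1.0; }

double brent(double a, double b, double eps = 1e-10) {
    double fa = f(a), fb = f(b);
    assert(fa * fb < 0.0);
    if (std::fabs(fa) < std::fabs(fb)) { std::swap(a, b); std::swap(fa, fb); }
    double bPrev = a, fbPrev = fa;   // b_{k-1}
    double bPrevPrev = a;            // b_{k-2}
    bool usedBisection = true;       // B in the conditions above
    const double delta = eps;        // assumption: tie delta to eps
    while (std::fabs(fb) >= eps && std::fabs(b - a) >= eps) {
        double s;
        if (fa != fbPrev && fb != fbPrev) {
            // inverse quadratic interpolation through the three points
            s = a * fb * fbPrev / ((fa - fb) * (fa - fbPrev))
              + b * fa * fbPrev / ((fb - fa) * (fb - fbPrev))
              + bPrev * fa * fb / ((fbPrev - fa) * (fbPrev - fb));
        } else {
            s = b - fb * (b - a) / (fb - fa);   // secant step through a and b
        }
        // bisection fallback conditions from the list above
        const bool bisect =
            ( usedBisection && (std::fabs(s - b) >= 0.5 * std::fabs(b - bPrev) ||
                                std::fabs(b - bPrev) < delta)) ||
            (!usedBisection && (std::fabs(s - b) >= 0.5 * std::fabs(bPrev - bPrevPrev) ||
                                std::fabs(bPrev - bPrevPrev) < delta));
        if (bisect) s = 0.5 * (a + b);
        usedBisection = bisect;
        const double fs = f(s);
        bPrevPrev = bPrev;
        bPrev = b; fbPrev = fb;
        if (fa * fs < 0.0) { b = s; fb = fs; }   // keep the root bracketed
        else               { a = s; fa = fs; }
        if (std::fabs(fa) < std::fabs(fb)) { std::swap(a, b); std::swap(fa, fb); }
    }
    return b;
}

int main() { std::printf("root = %.10f\n", brent(0.0, 2.0)); }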

With these modifications, the Brent algorithm is at least as fast as Bisection and in the best cases slightly faster than the pure Secant method. The Brent algorithm takes 6 iterations in case of a well-chosen range and 9 iterations in case of a broader range.

Results of Brent Algorithm

But Which Algorithm to Choose?

We have seen five different algorithms we can use to approximate the root of a function: Bisection, the Newton method, the Secant method, Dekker, and Brent. All of them have different strengths and drawbacks. In general, we can choose an algorithm as follows:

  • Use the Newton method for smooth and well-behaved functions where you also have the function's derivative
  • Use the Secant method for smooth and well-behaved functions where you don't have the function's derivative
  • Use the Brent method in cases where you're not sure, or where you know your function has jumps or other problems
  Algorithm       Start/Range    No. of Iterations
  Bisection       [0, 2]         36
  Newton          x0 = 1.5        6
  Secant (good)   [1.5, 2]        7
  Secant (bad)    [0, 2]         10
  Dekker (good)   [1.5, 2]        7
  Dekker (bad)    [0, 2]          9
  Brent (good)    [1.5, 2]        6
  Brent (bad)     [0, 2]          9

What are your thoughts? Did you like the post?
Feel free to comment and share this post.

License

This article, along with any associated source code and files, is licensed under The Code Project Open License (CPOL)


