Open Journal of Mathematical Sciences
ISSN: 2523-0212 (Online) 2616-4906 (Print)
DOI: 10.30538/oms2017.0003
Some multi-step iterative methods for solving nonlinear equations-I
Muhammad Saqib\(^1\), Muhammad Iqbal
Department of Mathematics, Lahore Leads University, Lahore-54590, Pakistan. (M.S & M.I)
\(^{1}\)Corresponding Author: saqib270@yahoo.com
Abstract
Keywords:
1. Introduction
Finding a root of a nonlinear equation of the form \(f(x)=0\) is one of the oldest and most basic problems in numerical analysis. Newton's method is among the oldest and most powerful formulas for approximating such a root; it has second order convergence. Many modifications of Newton's method have been made to increase its order of convergence, and recently many higher order iterative methods have been established using techniques such as Taylor series, Adomian decomposition, homotopy and modified homotopy, decomposition and modified decomposition, interpolation, and quadrature rules; see [1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12] and the references therein. Chun [3] introduced some multi-step iterative methods using Adomian decomposition; these methods require higher order derivatives. Later, Noor [4, 5] used a different technique to establish multi-step iterative methods that do not require higher order derivatives.
In this paper, some numerical methods based on a decomposition technique are proposed for solving nonlinear algebraic equations. For this purpose, we write the nonlinear equation as an equivalent coupled system of equations and apply the technique introduced by Daftardar-Gejji and Jafari [9]. Several recent iterative methods have been obtained in the same way, by rewriting a nonlinear equation as an equivalent system and then applying different techniques. In Section 3, we give detailed proofs of the convergence of the newly established iterative methods. In the last section, numerical results are given to compare these algorithms with some classical methods.
2. Iterative methods
Consider the nonlinear equation
\[ f(x)=0. \tag{1} \]
Algorithm 2.1: For any initial value \(x_{0},\) we compute the approximate solution \(x_{n+1}\) by the iteration scheme
\begin{eqnarray*} y_{n} &=&x_{n}-\frac{f(x_{n})}{f^{\prime }(x_{n})}, \\ x_{n+1} &=&x_{n}-\frac{f(x_{n})}{f^{\prime }(\frac{x_{n}+y_{n}}{2})}. \end{eqnarray*}
This algorithm has third order convergence and was established by Frontini and Sormani [1].

Let \(\gamma\) be an initial guess sufficiently close to a simple root of Eq. (1). We can rewrite Eq. (1) as the coupled system
\[ f(\gamma )+(x-\gamma )f^{\prime }\Big(\frac{x+\gamma }{2}\Big)+g(x)=0, \tag{2} \]
\[ g(x)=f(x)-f(\gamma )-(x-\gamma )f^{\prime }\Big(\frac{x+\gamma }{2}\Big). \tag{3} \]
From Eq. (2),
\[ x=\gamma -\frac{f(\gamma )}{f^{\prime }(\frac{x+\gamma }{2})}-\frac{g(x)}{f^{\prime }(\frac{x+\gamma }{2})}=c+N(x), \tag{4} \]
where
\[ c=\gamma -\frac{f(\gamma )}{f^{\prime }(\frac{x+\gamma }{2})} \]
and \(N(x)=-\frac{g(x)}{f^{\prime }(\frac{x+\gamma }{2})}\) is a nonlinear function. Following the technique of Daftardar-Gejji and Jafari [9], we seek the solution of Eq. (4) in the series form \(x=\sum_{i=0}^{\infty }x_{i},\) where
\[ x_{0}=c,\quad x_{1}=N(x_{0}),\quad x_{m+1}=N(x_{0}+x_{1}+\cdots +x_{m})-N(x_{0}+x_{1}+\cdots +x_{m-1}),\quad m\geq 1. \]
Then
\begin{eqnarray*} x &\approx &x_{0}+x_{1} \\ &=&\gamma -\frac{f(\gamma )}{f^{\prime }(\frac{x+\gamma }{2})}-\frac{g(x_{0})}{f^{\prime }(\frac{x_{0}+\gamma }{2})}. \end{eqnarray*}
From Eq. (3), we see that
\[ g(x_{0})=f(x_{0}), \]
and substituting this above gives
\[ x=\gamma -\frac{f(\gamma )}{f^{\prime }(\frac{x+\gamma }{2})}-\frac{f(x_{0})}{f^{\prime }(\frac{x_{0}+\gamma }{2})}. \]
Using this relation, we suggest the following three-step iterative method for solving Eq. (1).
Algorithm 2.2: For any initial value \(x_{0},\) we compute the approximate solution \(x_{n+1}\) by the iteration scheme
\begin{eqnarray*} \text{Predictor steps:}\;\;\;\; y_{n} &=&x_{n}-\frac{f(x_{n})}{f^{\prime }(x_{n})}, \\ z_{n} &=&x_{n}-\frac{f(x_{n})}{f^{\prime }(\frac{x_{n}+y_{n}}{2})}. \end{eqnarray*}
\[ \text{Corrector step:}\;\;\;\; x_{n+1}=z_{n}-\frac{f(z_{n})}{f^{\prime }(\frac{x_{n}+z_{n}}{2})}. \]
Retaining one more term of the series, we have
\begin{eqnarray*} x &\approx &x_{0}+x_{1}+x_{2} \\ &=&x_{0}+N(x_{0}+x_{1}) \\ &=&x_{0}-\frac{g(x_{0}+x_{1})}{f^{\prime }(\frac{x_{0}+x_{1}+\gamma }{2})} \\ &=&x_{0}-\frac{f(x_{0}+x_{1})+f(x_{0})}{f^{\prime }(\frac{x_{0}+x_{1}+\gamma }{2})}. \end{eqnarray*}
From this relation, we formulate a four-step iterative method for solving Eq. (1) as follows.
Algorithm 2.3: For any initial value \(x_{0},\) we compute the approximate solution \(x_{n+1}\) by the iteration scheme
\begin{eqnarray*} \text{Predictor steps:}\;\;\;\; y_{n} &=&x_{n}-\frac{f(x_{n})}{f^{\prime }(x_{n})}, \\ z_{n} &=&x_{n}-\frac{f(x_{n})}{f^{\prime }(\frac{x_{n}+y_{n}}{2})}, \\ u_{n} &=&z_{n}-\frac{f(z_{n})}{f^{\prime }(\frac{x_{n}+z_{n}}{2})}. \end{eqnarray*}
\[ \text{Corrector step:}\;\;\;\; x_{n+1}=z_{n}-\frac{f(u_{n})+f(z_{n})}{f^{\prime }(\frac{x_{n}+u_{n}}{2})}. \]
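To make the schemes concrete, the three algorithms above can be sketched in Python; the driver, function names, tolerance, and iteration cap below are our own illustrative choices, not part of the paper. Each `step` function maps x_n to x_{n+1}, and only \(f\) and \(f^{\prime }\) are needed.

```python
from math import cos, sin

def solve(step, x0, tol=1e-12, max_iter=50):
    """Iterate x_{n+1} = step(x_n) until |x_{n+1} - x_n| < tol."""
    x = x0
    for _ in range(max_iter):
        x_new = step(x)
        if abs(x_new - x) < tol:
            return x_new
        x = x_new
    return x

# Test problem f_3 from Section 4: f(x) = cos x - x, root near 0.7390851332.
f = lambda x: cos(x) - x
fp = lambda x: -sin(x) - 1

def alg_2_1(x):
    """Algorithm 2.1 (Frontini-Sormani midpoint method, third order)."""
    y = x - f(x) / fp(x)
    return x - f(x) / fp((x + y) / 2)

def alg_2_2(x):
    """Algorithm 2.2: two predictor steps, one corrector (fourth order)."""
    y = x - f(x) / fp(x)
    z = x - f(x) / fp((x + y) / 2)
    return z - f(z) / fp((x + z) / 2)

def alg_2_3(x):
    """Algorithm 2.3: three predictor steps, one corrector (fifth order)."""
    y = x - f(x) / fp(x)
    z = x - f(x) / fp((x + y) / 2)
    u = z - f(z) / fp((x + z) / 2)
    return z - (f(u) + f(z)) / fp((x + u) / 2)

roots = [solve(step, 1.0) for step in (alg_2_1, alg_2_2, alg_2_3)]
```

Note that, in line with the remark in the introduction, none of the steps uses a derivative higher than \(f^{\prime }\); the higher order of convergence is bought with extra evaluations of \(f\) and \(f^{\prime }\) per iteration.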
3. Convergence Analysis
In this section, we discuss in detail the convergence of Algorithms 2.2 and 2.3 established above.

Theorem 3.1. For an open interval \(I\subset \mathbb{R}\), let \(f:I\rightarrow \mathbb{R}\) and let \(\alpha \in I\) be a simple zero of \(f(x)=0.\) If \(f\) is sufficiently differentiable and \(x_{0}\) is sufficiently close to \(\alpha ,\) then the three-step iterative method defined by Algorithm 2.2 has fourth order convergence.
Proof. Expanding \(f(x)\) by Taylor's series about \(\alpha ,\) we have
Theorem 3.2. Let \(f:I\subset \mathbb{R}\rightarrow \mathbb{R}\) for an open interval \(I\) and let \(\alpha \in I\) be a simple zero of \(f(x)=0.\) If \(f\) is sufficiently differentiable and \(x_{0}\) is sufficiently close to \(\alpha ,\) then the four-step iterative method defined by Algorithm 2.3 has fifth order convergence and satisfies the error equation \[ x_{n+1}=\alpha +\Big(-\frac{1}{4}c_{3}c_{2}^{2}+c_{2}^{4}\Big)e_{n}^{5}+O(e_{n}^{6}), \] where \(e_{n}=x_{n}-\alpha\) and, as usual, \(c_{k}=\frac{f^{(k)}(\alpha )}{k!\,f^{\prime }(\alpha )}.\)
Proof. From Eq. (17), we have \[ u_{n}=\alpha +\Big(4c_{2}c_{3}-4c_{2}^{3}-\frac{1}{2}c_{4}\Big)e_{n}^{4}+\Big(\frac{27}{8}c_{3}^{2}-\frac{73}{4}c_{3}c_{2}^{2}+\frac{5}{2}c_{2}c_{4}+10c_{2}^{4}-\frac{11}{16}c_{5}\Big)e_{n}^{5}+\cdots . \] Expanding \(f(u_{n})\) and \(f^{\prime }(\frac{x_{n}+u_{n}}{2})\) in Taylor series about \(\alpha\) and substituting into the corrector step of Algorithm 2.3 yields the stated error equation.
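The fifth-order claim can also be checked numerically. The sketch below runs Algorithm 2.3 on \(f_{5}(x)=x^{3}-10\) (one of the test problems of Section 4) in high-precision arithmetic and estimates the computational order of convergence \(\rho \approx \frac{\ln (e_{n+1}/e_{n})}{\ln (e_{n}/e_{n-1})}\); the working precision, test function, and starting point are our own illustrative assumptions.

```python
from decimal import Decimal, getcontext

getcontext().prec = 120  # enough digits to resolve several fifth-order steps

f = lambda x: x**3 - 10
fp = lambda x: 3 * x**2
alpha = Decimal(10) ** (Decimal(1) / Decimal(3))  # root, to working precision

xs = [Decimal("1.5")]
for _ in range(4):  # four sweeps of Algorithm 2.3
    x = xs[-1]
    y = x - f(x) / fp(x)
    z = x - f(x) / fp((x + y) / 2)
    u = z - f(z) / fp((x + z) / 2)
    xs.append(z - (f(u) + f(z)) / fp((x + u) / 2))

errs = [abs(x - alpha) for x in xs]
# Computational order of convergence from three consecutive errors.
rho = (errs[3] / errs[2]).ln() / (errs[2] / errs[1]).ln()
```

For a fifth-order method the estimate \(\rho\) should approach 5 once the iterates enter the asymptotic regime.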
4. Applications
In this section, we present some numerical examples to examine the validity and efficiency of the newly developed algorithms, namely Algorithm 2.2 and Algorithm 2.3. We also compare these algorithms with Newton's method (NM), Abbasbandy's method (AM), Householder's method (HHM), Halley's method (HM), Chun's method (CM) [3], and Noor's method (NR) [4]. We take \(\epsilon =10^{-25}\) and use the following stopping criteria:
\begin{eqnarray*} (i)\;\;\;\; |x_{n}-x_{n-1}| &<&\epsilon , \\ (ii)\;\;\;\; |f(x_{n})| &<&\epsilon . \end{eqnarray*}
We consider the following nonlinear scalar equations for comparison:
\begin{eqnarray*} f_{1}(x) &=&\sin ^{2}x-x^{2}+1=0, \\ f_{2}(x) &=&x^{2}-e^{x}-3x+2=0, \\ f_{3}(x) &=&\cos x-x=0, \\ f_{4}(x) &=&(x-1)^{3}-1=0, \\ f_{5}(x) &=&x^{3}-10=0, \\ f_{6}(x) &=&e^{x^{2}+7x-30}-1=0. \end{eqnarray*}
Comparison Table
\begin{array}{ccccc}
\text{Examples} & & \text{Iterations} & x_{n} & f(x_{n}) \\
f_{1},\ x_{0}=1 & & & & \\
\text{NM} & & 7 & 1.40449164831534122635086 & -1.05e^{-50} \\
\text{AM} & & 5 & 1.40449164831534122635086 & -5.82e^{-54} \\
\text{HM} & & 4 & 1.40449164831534122635086 & -5.45e^{-63} \\
\text{CM} & & 5 & 1.40449164831534122635086 & -2.01e^{-64} \\
\text{HHM} & & 6 & 1.40449164821534122603508 & 1.82e^{-25} \\
\text{NR} & & 5 & 1.40449164821534122603508 & 9.23e^{-26} \\
\text{Alg. }2.2 & & 3 & 1.40449164821534122603508 & 3.40e^{-25} \\
\text{Alg. }2.3 & & 3 & 1.40449164821534122603509 & 2.32e^{-44} \\
f_{2},\ x_{0}=2 & & & & \\
\text{NM} & & 6 & 0.257530285439860760455367 & 2.94e^{-55} \\
\text{AM} & & 5 & 0.257530285439860760455367 & 1.03e^{-63} \\
\text{HM} & & 5 & 0.257530285439860760455367 & 0 \\
\text{CM} & & 4 & 0.257530285439860760455367 & 1.05e^{-62} \\
\text{HHM} & & 5 & 0.257530285439860760455367 & -6.01e^{-25} \\
\text{NR} & & 5 & 0.257530285439860760455367 & 1.08e^{-23} \\
\text{Alg. }2.2 & & 3 & 0.257530285439860934975933 & 1.72e^{-69} \\
\text{Alg. }2.3 & & 2 & 0.257530285439860934975933 & 1.01e^{-29} \\
f_{3},\ x_{0}=1.7 & & & & \\
\text{NM} & & 5 & 0.739085133215160641655372 & -2.04e^{-31} \\
\text{AM} & & 4 & 0.739085133215160641655372 & -7.13e^{-47} \\
\text{HM} & & 4 & 0.739085133215160641655372 & -5.04e^{-58} \\
\text{CM} & & 4 & 0.739085133215160641655372 & 0 \\
\text{HHM} & & 4 & 0.739085133215160641655372 & 3.91e^{-17} \\
\text{NR} & & 4 & 0.739085133215160645628855 & 6.66e^{-18} \\
\text{Alg. }2.2 & & 3 & 0.739085133215160641655312 & 3.21e^{-50} \\
\text{Alg. }2.3 & & 3 & 0.739085133215160641655312 & 1.21e^{-100} \\
f_{4},\ x_{0}=1.5 & & & & \\
\text{NM} & & 8 & 2.0000000000000000000001234 & 2.05e^{-42} \\
\text{AM} & & 5 & 2 & 0 \\
\text{HM} & & 5 & 2 & 0 \\
\text{CM} & & 5 & 2 & 0 \\
\text{HHM} & & 7 & 2.000000000000000000000828 & 2.47e^{-22} \\
\text{NR} & & 5 & 2.000000000000000000000302 & 1.12e^{-24} \\
\text{Alg. }2.2 & & 4 & 2.000000000000000000000007 & 1.83e^{-44} \\
\text{Alg. }2.3 & & 4 & 2.000000000000000000000000 & 5.07e^{-27} \\
f_{5},\ x_{0}=1.5 & & & & \\
\text{NM} & & 7 & 2.154434690031883721759235 & 2.05e^{-54} \\
\text{AM} & & 5 & 2.154434690031883721759235 & -5.01e^{-63} \\
\text{HM} & & 5 & 2.154434690031883721759235 & -5.03e^{-62} \\
\text{CM} & & 5 & 2.154434690031883721759235 & -5.01e^{-64} \\
\text{HHM} & & 6 & 2.154434690031883721759293 & 7.86e^{-27} \\
\text{NR} & & 4 & 2.154434690031883721759292 & 8.98e^{-24} \\
\text{Alg. }2.2 & & 3 & 2.154434690031883721759294 & 2.00e^{-27} \\
\text{Alg. }2.3 & & 3 & 2.154434690031883721759293 & 1.33e^{-50} \\
f_{6},\ x_{0}=3.5 & & & & \\
\text{NM} & & 13 & 3.000000000000000000000000 & 1.52e^{-48} \\
\text{AM} & & 7 & 3.000000000000000000000000 & -4.32e^{-48} \\
\text{HM} & & 8 & 3.000000000000000000000000 & 2.01e^{-62} \\
\text{CM} & & 8 & 3.000000000000000000000000 & 2.05e^{-62} \\
\text{HHM} & & 12 & 3.000000000000000000000000 & 5.47e^{-25} \\
\text{NR} & & 6 & 3.00000010285594873990664 & 1.34e^{-06} \\
\text{Alg. }2.2 & & 4 & 3.00000000000000000000000 & 4.76e^{-30} \\
\text{Alg. }2.3 & & 4 & 3.00000000000000000000000 & 1.99e^{-102}
\end{array}

5. Conclusion
We have established two new algorithms, of fourth and fifth order convergence, by applying a modified decomposition technique to a coupled system. We have discussed the convergence analysis of these newly established algorithms and compared them computationally with some well known methods for solving nonlinear equations. The numerical table shows that the new methods converge quickly to the true solutions of the test equations and are competitive with the classical methods.

Competing Interests
The author(s) do not have any competing interests in the manuscript.

References
- Frontini, M., & Sormani, E. (2003). Some variant of Newton's method with third-order convergence. Applied Mathematics and Computation, 140(2), 419-426.
- Adomian, G. (1988). Nonlinear Stochastic Systems Theory and Applications to Physics (Vol. 46). Springer Science & Business Media.
- Chun, C. (2005). Iterative methods improving Newton's method by the decomposition method. Computers & Mathematics with Applications, 50(10), 1559-1568.
- Noor, M. A., & Noor, K. I. (2006). Some iterative schemes for nonlinear equations. Applied Mathematics and Computation, 183(2), 774-779.
- Noor, M. A., Noor, K. I., Mohyud-Din, S. T., & Shabbir, A. (2006). An iterative method with cubic convergence for nonlinear equations. Applied Mathematics and Computation, 183(2), 1249-1255.
- Babolian, E., & Biazar, J. (2002). On the order of convergence of Adomian method. Applied Mathematics and Computation, 130(2), 383-387.
- Weerakoon, S., & Fernando, T. G. I. (2000). A variant of Newton's method with accelerated third-order convergence. Applied Mathematics Letters, 13(8), 87-93.
- Daftardar-Gejji, V., & Jafari, H. (2006). An iterative method for solving nonlinear functional equations. Journal of Mathematical Analysis and Applications, 316(2), 753-763.
- Saqib, M., Iqbal, M., Ali, S., & Ismaeel, T. (2015). New fourth and fifth-order iterative methods for solving nonlinear equations. Applied Mathematics, 6(08), 1220-1227.
- Frontini, M., & Sormani, E. (2003). Modified Newton's method with third-order convergence and multiple roots. Journal of Computational and Applied Mathematics, 156(2), 345-354.
- Hasanov, V. I., Ivanov, I. G., & Nedzhibov, G. (2002). A new modification of Newton's method. Appl. Math. Eng., 27, 278-286.