Open Journal of Mathematical Analysis
ISSN: 2616-8111 (Online) 2616-8103 (Print)
DOI: 10.30538/psrp-oma2018.0007
A new third-order iteration method for solving nonlinear equations
Muhammad Saqib, Zain Majeed, Muhammad Quraish, Waqas Nazeer\(^1\)
Department of Mathematics, Govt. Degree College Kharian, Pakistan; (M.S.)
Department of Mathematics and Statistics, The University of Lahore, Lahore 54000, Pakistan; (Z.M.)
Department of Mathematics, The University of Lahore (Pakpattan Campus), Lahore, Pakistan; (M.Q.)
Division of Science and Technology, University of Education, Lahore 54000, Pakistan; (W.N.)
\(^{1}\)Corresponding Author; nazeer.waqas@ue.edu.pk
1. Introduction
Solving equations in one variable is among the most discussed problems in numerical analysis. There are several numerical techniques for solving nonlinear equations (see for example [1, 2, 3, 4, 5, 6, 7, 8] and the references therein). For a given function \(f\), we have to find at least one solution of the equation \(f(x)=0\). Note that, a priori, we place no restrictions on the function \(f\). To check whether a candidate \(\alpha\) is a true solution, we need to be able to evaluate the function, that is, to verify that \(f(\alpha) = 0\). In practice, the mere ability to evaluate the function does not suffice; we need to assume some kind of "good behavior". The more we assume, the more potential we have to develop fast algorithms for finding the root. At the same time, stronger assumptions reduce the class of functions that satisfy them. This trade-off is a fundamental paradigm in numerical analysis. We present a new iteration method that approximates the root of a nonlinear equation in one variable using values of the function and its derivative, and that converges to the root cubically. In this study, we suggest an improvement of the Kang iteration method at the expense of one additional first-derivative evaluation. It is shown that the suggested method converges to the root, with order of convergence at least three in a neighborhood of the root, whenever the first and higher-order derivatives of the function exist in that neighborhood. This means that our method approximately triples the number of significant digits at each iteration. Numerical examples support this theory, and the computational order of convergence is even more than three for certain functions. The fixed point iteration method [9] is the fundamental algorithm for solving nonlinear equations in one variable. In this method, the equation \(f(x)=0\) is rewritten as \(x=g(x)\), where \(g\) satisfies:
- \(\exists\) \([a,b]\) such that \(g(x)\in \lbrack a,b]\) for all \(x\in \lbrack a,b]\),
- \(\exists\) \([a,b]\) such that \(\left\vert g^{\prime}(x)\right\vert \leq L<1\) for all \(x\in \lbrack a,b]\).
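A minimal Python sketch of this classical fixed point iteration (the function names, tolerance, and iteration cap below are our own illustrative choices, not the paper's):

```python
import math

def fixed_point(g, x0, tol=1e-12, max_iter=500):
    """Fixed point iteration x_{n+1} = g(x_n).

    Converges linearly when g maps [a, b] into itself and
    |g'(x)| <= L < 1 there (the two conditions above).
    Returns the approximate fixed point and the iteration count.
    """
    x = x0
    for n in range(1, max_iter + 1):
        x_new = g(x)
        if abs(x_new - x) < tol:
            return x_new, n
        x = x_new
    return x, max_iter

# Example 4.2 of this paper: f(x) = x - cos x, g(x) = cos x, x0 = 6
root, iters = fixed_point(math.cos, 6.0)
```

Here \(g(x)=\cos x\) satisfies both conditions near the root, so the iteration converges, though only linearly.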
Definition 1.1. Suppose a sequence \(\{x_n\}\) converges to \(\alpha\) with the property that \begin{equation*} \lim\limits_{n \to \infty} \left\vert \frac{x_{n+1}-\alpha }{(x_{n}-\alpha )^{q}}\right\vert = D, \end{equation*} where \(D \in \mathbb{R}^+\) and \(q \geq 1\) is an integer. Then \(D\) is called the constant of convergence and \(q\) is called the order of convergence.
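When the root \(\alpha\) is known, \(q\) can be estimated numerically from three consecutive iterates; a small Python sketch (the helper name and the Newton's-method check are illustrative additions, not from the paper):

```python
import math

def conv_order(xs, alpha):
    """Computational order of convergence from the last three
    iterates: q ~ ln|e_{n+1}/e_n| / ln|e_n/e_{n-1}|, e_n = x_n - alpha."""
    e = [abs(x - alpha) for x in xs[-3:]]
    return math.log(e[2] / e[1]) / math.log(e[1] / e[0])

# Sanity check with Newton's method for f(x) = x^2 - 2, which has order 2
xs = [1.5]
for _ in range(3):
    x = xs[-1]
    xs.append(x - (x * x - 2.0) / (2.0 * x))
q = conv_order(xs, math.sqrt(2.0))   # close to 2
```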
Definition 1.2. [2] Let \(g \in C^{p}[a,b]\) with fixed point \(\alpha\). If \(g^{(k)}(\alpha)=0\) for \(k = 1, 2, \ldots, p-1\) and \(g^{(p)}(\alpha) \neq 0\), then the fixed point sequence \(\{x_{n}\}\) is of order \(p\).
Algorithm 1.3. [10] For a given \(x_{0}\), Kang et al. compute the approximate solution \(x_{n+1}\) by the iteration scheme \begin{equation*} x_{n+1} = \frac{g\left( x_{n}\right) - x_{n} g^{\prime}(x_{n})}{1 - g^{\prime}(x_{n})}, \end{equation*} where \(g^{\prime}(x_{n}) \neq 1\). This scheme has convergence of order 2.
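Because this scheme is, algebraically, one Newton step for \(F(x) = x - g(x)\) per iteration, it is straightforward to implement. A Python sketch (function names, tolerance, and starting point are our choices; the test function is from Example 4.2):

```python
import math

def kang(g, dg, x0, tol=1e-14, max_iter=100):
    """Kang et al. scheme: x_{n+1} = (g(x) - x g'(x)) / (1 - g'(x)),
    i.e. one Newton step for F(x) = x - g(x) at each iteration."""
    x = x0
    for _ in range(max_iter):
        d = 1.0 - dg(x)
        if d == 0.0:
            raise ZeroDivisionError("g'(x) = 1 is excluded by the scheme")
        x_new = (g(x) - x * dg(x)) / d
        if abs(x_new - x) < tol:
            return x_new
        x = x_new
    return x

# g(x) = cos x, g'(x) = -sin x; start near the root of x = cos x
r = kang(math.cos, lambda x: -math.sin(x), 1.0)
```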
2. New Iteration Method
In the fixed-point iteration method, the nonlinear equation \(f\left( x\right) = 0\) is converted to the equivalent form \begin{equation*} x = g(x). \end{equation*} Let \(\alpha\) be the root of \(f\left( x \right) = 0\). The functional equation of Algorithm 1.3 can be written as \begin{equation*} H_{g}(x) = \frac{g\left( x\right) - x g^{\prime}(x)}{1 - g^{\prime}(x)}, \hspace{5mm} g^{\prime}(x) \neq 1, \end{equation*} or equivalently, \begin{equation*} H_{g}(x) = x - \frac{x - g(x)}{1 - g^{\prime}(x)}. \end{equation*}
Algorithm 2.1.
3. Convergence Analysis of Algorithm
Theorem 3.1. Let \(f : D \subset \mathbb{R} \rightarrow \mathbb{R}\) be a nonlinear function on an open interval \(D\), such that \(f\left(x\right) = 0\) (or equivalently \(x = g\left(x\right) )\), has a simple root \(\alpha \in D\). Here, \(g : D \subset \mathbb{R} \rightarrow \mathbb{R}\) is sufficiently smooth in a neighborhood of the root \(\alpha\). Then, the order of convergence of Algorithm 2.1 is at least \(3\), where \(c_{k} = \frac{g^{(k)}(\alpha)}{k!(1 - g^{\prime}(\alpha))}\), \(k = 2, 3, \ldots\).
Proof. As \(g\left(\alpha\right) = \alpha\), let \(x_{n} = \alpha + e_{n}\) and \(x_{n + 1} = \alpha + e_{n+1}\). By Taylor's expansion, we have \begin{equation*} g(x_{n}) = g\left(\alpha\right) + e_{n}g^{\prime}(\alpha) + \frac{e_{n}^{2}}{2!}g^{\prime\prime}(\alpha) + \frac{e_{n}^{3}}{3!} g^{\prime \prime \prime}(\alpha ) + O(e_{n}^{4}). \end{equation*} Since \(g(\alpha)=\alpha\), this implies that \begin{equation*} g(x_{n}) = \alpha + e_{n}g^{\prime}(\alpha) + \frac{e_{n}^{2}}{2!}g^{\prime\prime}(\alpha) + \frac{e_{n}^{3}}{3!} g^{\prime \prime \prime}(\alpha ) + O(e_{n}^{4}). \end{equation*}
4. Comparison
\noindent Comparison of the Fixed Point Method (FPM), the Kang Iteration Method (KIM) and our new iteration method (NIM) is shown in the following tables, with roots corrected up to seventeen decimal places.
Example 4.1 \(f(x)=x^{2}-23x-135\), \(g(x)=23+\frac{135}{x}\)
Table 1. Comparison of FPM, NIM, KIM
Methods | \(N\) | \(N_{f}\) | \(x_{0}\) | \(f(x_{n+1})\) | \(x_{n+1}\) |
---|---|---|---|---|---|
FPM | \(24\) | \(24\) | \(2\) | \(2.420536e-15\) | \(27.84778272427181476 \) |
NIM | \(7\) | \(14\) | \(2\) | \(2.781849e-20\) | \(27.84778272427181484 \) |
KIM | \(4\) | \(12\) | \(2\) | \(2.647181e-16\) | \(27.84778272427181483 \) |
Example 4.2 \(f(x)=x-\cos x,\) \(g(x)=\cos x\)
Table 2. Comparison of FPM, NIM, KIM
Methods | \(N\) | \(N_{f}\) | \(x_{0}\) | \(f(x_{n+1})\) | \(x_{n+1}\) |
---|---|---|---|---|---|
FPM | \(91\) | \(91\) | \(6\) | \(1.376330e-16\) | \(0.73908513321516064\) |
NIM | \(10\) | \(20\) | \(6\) | \(1.083243e-29\) | \(0.73908513321516064\) |
KIM | \(5\) | \(15\) | \(6\) | \(1.809632e-36\) | \(0.73908513321516064 \) |
Example 4.3 \(f(x)=x^{3}+4x^{2}+8x+8\), \(g(x)=-1-\frac{1}{2}x^{2}-\frac{1}{8}x^{3}\)
Table 3. Comparison of FPM, NIM, KIM
Methods | \(N\) | \(N_{f}\) | \(x_{0}\) | \(f(x_{n+1})\) | \(x_{n+1}\) |
---|---|---|---|---|---|
FPM | \(50\) | \(50\) | \(-1.7\) | \(1.416263e-15\) | \(-1.99999999999999965 \) |
NIM | \(5\) | \(10\) | \(-1.7\) | \(7.836283e-27\) | \(-2.00000000000000000 \) |
KIM | \(3\) | \(9\) | \(-1.7\) | \(4.983507e-25\) | \(-2.00000000000000000 \) |
Example 4.4 \(f(x)=\ln (x-2)+x,g(x)=2+e^{-x}\)
Table 4. Comparison of FPM, NIM, KIM
Methods | \(N\) | \(N_{f}\) | \(x_{0}\) | \(f(x_{n+1})\) | \(x_{n+1}\) |
---|---|---|---|---|---|
FPM | \(18\) | \(18\) | \(0.1\) | \(1.163802e-15\) | \(2.12002823898764110 \) |
NIM | \(5\) | \(10\) | \(0.1\) | \(5.972968e-22\) | \(2.12002823898764123\) |
KIM | \(3\) | \(9\) | \(0.1\) | \(8.594812e-22\) | \(2.12002823898764123 \) |
Example 4.5 \(f(x)=x^{2}\sin x-\cos x,g(x)=\sqrt{\frac{1}{\tan x}}\)
Table 5. Comparison of FPM, NIM, KIM
Methods | \(N\) | \(N_{f}\) | \(x_{0}\) | \(f(x_{n+1})\) | \(x_{n+1}\) |
---|---|---|---|---|---|
FPM | \(417\) | \(417\) | \(2\) | \(2.668900e-16\) | \(0.89520604538423175 \) |
NIM | \(7\) | \(14\) | \(2\) | \(3.195785e-31\) | \(0.89520604538423175 \) |
KIM | \(4\) | \(12\) | \(2\) | \(1.746045e-23\) | \(0.89520604538423175 \) |
Example 4.6 \(f(x)=x^{2}-5x-16,g(x)=5+\frac{16}{x}\)
Table 6. Comparison of FPM, NIM, KIM
Methods | \(N\) | \(N_{f}\) | \(x_{0}\) | \(f(x_{n+1})\) | \(x_{n+1}\) |
---|---|---|---|---|---|
FPM | \(34\) | \(34\) | \(1\) | \(6.417745e-16\) | \(7.21699056602830184 \) |
NIM | \(7\) | \(14\) | \(1\) | \(2.984439e-27\) | \(7.21699056602830191 \) |
KIM | \(4\) | \(12\) | \(1\) | \(2.797394e-26\) | \(7.21699056602830191 \) |
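As a cross-check of Example 4.6, the fixed point and Kang schemes can be re-run directly; a self-contained Python sketch (the stopping rule and tolerance are our own choices, so the iteration counts need not match the table digit for digit):

```python
import math

def g(x):          # Example 4.6: x = 5 + 16/x  <=>  x^2 - 5x - 16 = 0
    return 5.0 + 16.0 / x

def dg(x):
    return -16.0 / (x * x)

def fpm(x, tol=1e-12, max_iter=200):
    """Fixed point method: one g-evaluation per iteration."""
    for n in range(1, max_iter + 1):
        x_new = g(x)
        if abs(x_new - x) < tol:
            return x_new, n
        x = x_new
    return x, max_iter

def kim(x, tol=1e-12, max_iter=200):
    """Kang iteration method (Algorithm 1.3)."""
    for n in range(1, max_iter + 1):
        x_new = (g(x) - x * dg(x)) / (1.0 - dg(x))
        if abs(x_new - x) < tol:
            return x_new, n
        x = x_new
    return x, max_iter

root_fpm, n_fpm = fpm(1.0)
root_kim, n_kim = kim(1.0)
# both approach (5 + sqrt(89)) / 2; KIM needs far fewer iterations
```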
5. Conclusions
A new third-order iteration method for solving nonlinear equations has been introduced. Using several examples, the performance of NIM has also been discussed; it performs much better than the fixed point method and the method presented in [10].
Competing Interests
The authors do not have any competing interests in the manuscript.
References
- Biazar, J., & Amirteimoori, A. (2006). An improvement to the fixed point iterative method. Applied Mathematics and Computation, 182(1), 567-571.
- Babolian, E., & Biazar, J. (2002). Solution of nonlinear equations by modified Adomian decomposition method. Applied Mathematics and Computation, 132(1), 167-172.
- Kou, J. (2007). The improvements of modified Newton's method. Applied Mathematics and Computation, 189(1), 602-609.
- Gutiérrez, J. M., & Hernández, M. A. (2001). An acceleration of Newton's method: Super-Halley method. Applied Mathematics and Computation, 117(2-3), 223-239.
- Abbasbandy, S. (2003). Improving Newton–Raphson method for nonlinear equations by modified Adomian decomposition method. Applied Mathematics and Computation, 145(2-3), 887-893.
- Weerakoon, S., & Fernando, T. G. I. (2000). A variant of Newton's method with accelerated third-order convergence. Applied Mathematics Letters, 13(8), 87-93.
- Homeier, H. H. H. (2003). A modified Newton method for rootfinding with cubic convergence. Journal of Computational and Applied Mathematics, 157(1), 227-230.
- Frontini, M., & Sormani, E. (2003). Some variant of Newton's method with third-order convergence. Applied Mathematics and Computation, 140(2-3), 419-426.
- Isaacson, E., & Keller, H. B. (2012). Analysis of Numerical Methods. Courier Corporation.
- Kang, S. M., Rafiq, A., & Kwun, Y. C. (2013). A new second-order iteration method for solving nonlinear equations. In Abstract and Applied Analysis (Vol. 2013). Hindawi.