Open Journal of Mathematical Sciences
ISSN: 2523-0212 (Online) 2616-4906 (Print)
DOI: 10.30538/oms2019.0053
Modified Abbasbandy’s method free from second derivative for solving nonlinear equations
Sahar Saba, Amir Naseem\(^1\), Muhammad Irfan Saleem
Barani Institute of Sciences, Sahiwal, Pakistan. (S.S.)
Department of Mathematics, University of Management and Technology, Lahore 54000, Pakistan. (A.N.)
Department of Mathematics, Lahore Leads University, Lahore 54000, Pakistan. (M.I.S.)
\(^{1}\)Corresponding Author: amir14514573@yahoo.com
1. Introduction
One of the complex problems in science, and especially in mathematics, is solving nonlinear equations.

2. Iterative methods
Let \(f:X\rightarrow \mathbb{R}\), \(X\subset \mathbb{R}\), be a scalar function. Expanding \(f(x)\) in a Taylor series about the point \(x_{k}\), we obtain Abbasbandy's method
$$ x_{k+1}=x_{k}-\frac{f(x_{k})}{f^{\prime }(x_{k})}-\frac{f^{2}(x_{k})f^{\prime \prime }(x_{k})}{2f^{\prime 3}(x_{k})}-\frac{f^{3}(x_{k})f^{\prime \prime \prime }(x_{k})}{6f^{\prime 4}(x_{k})}. $$

Algorithm 1. For a given \(x_{0}\), compute the approximate solution \(x_{n+1}\) by the following three-step iterative scheme:
\begin{eqnarray*}
y_{n} &=&x_{n}-\frac{f(x_{n})}{f^{\prime }(x_{n})},\quad n=0,1,2,\ldots, \\
w_{n}&=&y_{n}-\frac{f(y_{n})}{f^{\prime }(y_{n})}, \\
x_{n+1}&=&w_{n}-\frac{f(w_{n})}{f^{\prime }(w_{n})}-\frac{f^{2}(w_{n})f^{\prime \prime }(w_{n})}{2f^{\prime 3}(w_{n})}-\frac{f^{3}(w_{n})f^{\prime \prime \prime }(w_{n})}{6f^{\prime 4}(w_{n})}.
\end{eqnarray*}
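Algorithm 1 can be sketched in code as follows. This is a minimal illustration, not the authors' implementation: the derivatives \(f'\), \(f''\), \(f'''\) are supplied as callables, and the names `algorithm1`, `tol` and `max_iter` are our own choices.

```python
def algorithm1(f, fp, fpp, fppp, x0, tol=1e-12, max_iter=50):
    """Two Newton steps followed by an Abbasbandy-type correction
    (Algorithm 1). fp, fpp, fppp are f', f'', f'''."""
    x = x0
    for _ in range(max_iter):
        if abs(f(x)) < tol:                         # already at a root
            return x
        y = x - f(x) / fp(x)                        # first Newton step
        w = y - f(y) / fp(y)                        # second Newton step
        fw, fpw = f(w), fp(w)
        x = (w - fw / fpw
             - fw**2 * fpp(w) / (2 * fpw**3)
             - fw**3 * fppp(w) / (6 * fpw**4))      # Abbasbandy correction
    return x
```

For example, `algorithm1(lambda x: x**2 - 2, lambda x: 2*x, lambda x: 2.0, lambda x: 0.0, 1.5)` converges to \(\sqrt{2}\) in very few iterations.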
By approximating the higher derivatives with finite differences, we develop the following algorithms.

Algorithm 2. For a given \(x_{0}\), compute the approximate solution \(x_{n+1}\) by the following iterative scheme:
\begin{eqnarray*}
y_{n} &=&x_{n}-\frac{f(x_{n})}{f^{\prime }(x_{n})},\quad n=0,1,2,\ldots, \\
w_{n}&=&y_{n}-\frac{f(y_{n})}{f^{\prime }(y_{n})}, \\
x_{n+1}&=&w_{n}-\frac{f(w_{n})}{f^{\prime }(w_{n})}-\frac{f^{2}(w_{n})f^{\prime \prime }(w_{n})}{2f^{\prime 3}(w_{n})}+\frac{f^{3}(w_{n})f^{\prime }(y_{n})\left[f^{\prime \prime }(w_{n})-f^{\prime \prime }(y_{n})\right]}{6f(y_{n})f^{\prime 4}(w_{n})}.
\end{eqnarray*}
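A sketch of Algorithm 2 under the same conventions (callables for \(f\), \(f'\), \(f''\); the helper names are ours):

```python
def algorithm2(f, fp, fpp, x0, tol=1e-12, max_iter=50):
    """Algorithm 2: the third-derivative term of Algorithm 1 is
    replaced by a difference of second derivatives at w_n and y_n."""
    x = x0
    for _ in range(max_iter):
        if abs(f(x)) < tol:
            return x
        y = x - f(x) / fp(x)
        fy = f(y)
        w = y - fy / fp(y)
        fw, fpw = f(w), fp(w)
        x = (w - fw / fpw
             - fw**2 * fpp(w) / (2 * fpw**3)
             + fw**3 * fp(y) * (fpp(w) - fpp(y)) / (6 * fy * fpw**4))
    return x
```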
Algorithm 3. For a given \(x_{0}\), compute the approximate solution \(x_{n+1}\) by the following iterative scheme:
\begin{eqnarray*}
y_{n} &=&x_{n}-\frac{f(x_{n})}{f^{\prime }(x_{n})},\quad n=0,1,2,\ldots, \\
w_{n}&=&y_{n}-\frac{f(y_{n})}{f^{\prime }(y_{n})}, \\
x_{n+1}&=&w_{n}-\frac{f(w_{n})}{f^{\prime }(w_{n})}-\frac{f^{\prime }(y_{n})f^{2}(w_{n})}{2f^{\prime 3 }(w_{n})} \left[\frac{f^{\prime }(y_{n})-f^{\prime }(w_{n})}{f(y_{n})}\left(1-\frac{f^{\prime }(y_{n})f(w_{n})}{3f(y_{n}) f^{\prime }(w_{n})}\right)+\frac{f^{\prime }(x_{n})f(w_{n})\left(f^{\prime }(x_{n})-f^{\prime }(y_{n})\right)}{3f(x_{n})f(y_{n})f^{\prime }(w_{n})}\right].
\end{eqnarray*}
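Algorithm 3 uses only values of \(f\) and \(f'\). A minimal sketch, assuming the scheme as displayed above, with function and parameter names of our choosing:

```python
def algorithm3(f, fp, x0, tol=1e-12, max_iter=50):
    """Algorithm 3: a second-derivative-free scheme built only
    from values of f and f' at x_n, y_n and w_n."""
    x = x0
    for _ in range(max_iter):
        fx = f(x)
        if abs(fx) < tol:
            return x
        fpx = fp(x)
        y = x - fx / fpx
        fy, fpy = f(y), fp(y)
        w = y - fy / fpy
        fw, fpw = f(w), fp(w)
        # bracketed factor of the correction term
        bracket = ((fpy - fpw) / fy * (1 - fpy * fw / (3 * fy * fpw))
                   + fpx * fw * (fpx - fpy) / (3 * fx * fy * fpw))
        x = w - fw / fpw - fpy * fw**2 / (2 * fpw**3) * bracket
    return x
```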
3. Convergence Analysis
In this section, we prove the convergence of our proposed iterative methods.

Theorem 3.1. Suppose that \(\alpha \) is a root of the equation \(f(x)=0\). If \(f(x)\) is sufficiently smooth in a neighborhood of \(\alpha \), then the convergence orders of Algorithm 1, Algorithm 2 and Algorithm 3 are at least twelve, twelve and ten, respectively.
Proof. Suppose that \(\alpha \) is a root of the equation \(f(x)=0\) and let \(e_n=x_n-\alpha\) be the error at the \(n\)th iteration. By Taylor series expansion, we have \begin{eqnarray*} f(x_n)&=&{f^{\prime }(\alpha)e_n}+\frac{1}{2!}{f^{\prime \prime }(\alpha)e_n^2}+\frac{1}{3!}{f^{\prime \prime \prime }(\alpha)e_n^3}+\frac{1}{4!}{f^{(iv) }(\alpha)e_n^4}+\frac{1}{5!}{f^{(v) }(\alpha)e_n^5}+\frac{1}{6!}{f^{(vi) }(\alpha)e_n^6}+\ldots \end{eqnarray*}
Writing \(c_k=\frac{f^{(k)}(\alpha)}{k!\,f^{\prime }(\alpha)}\) and carrying the expansions through the three steps of each scheme, we obtain the error equations
\( x_{n+1}=\alpha+(2c_2^{11}-2c_3c_2^9)e_n^{12}+O(e_n^{13}) \) for Algorithm 1,\\ \( x_{n+1}=\alpha+2c_2^{11}e_n^{12}+O(e_n^{13}) \) for Algorithm 2,\\ and \( x_{n+1}=\alpha-\frac{3c_3c_2^7}{2}e_n^{10}+O(e_n^{11}) \) for Algorithm 3, which implies that Algorithms 1, 2 and 3 have convergence order at least twelve, twelve and ten, respectively.
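The theoretical orders can be checked numerically with the computational order of convergence \(\rho\approx \ln|e_{n+1}/e_{n}|/\ln|e_{n}/e_{n-1}|\). Orders as high as twelve exhaust double precision after one or two steps, so the sketch below (our own helper names) demonstrates the estimator on Newton's method, whose quadratic order is visible in double precision; the same estimator applies to Algorithms 1, 2 and 3 under multiprecision arithmetic.

```python
import math

def newton_iterates(f, fp, x0, n):
    """Return the iterates x_0, ..., x_n of Newton's method."""
    xs = [x0]
    for _ in range(n):
        x = xs[-1]
        xs.append(x - f(x) / fp(x))
    return xs

def coc(xs, alpha):
    """Computational order of convergence from the last four iterates."""
    e = [abs(x - alpha) for x in xs[-4:]]
    return math.log(e[3] / e[2]) / math.log(e[2] / e[1])
```

For \(f(x)=x^2-2\) with \(x_0=1.5\) the estimate comes out close to 2, as expected for Newton's method.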
4. Applications
In this section we solve some nonlinear equations to illustrate the efficiency of our developed algorithms. We compare our developed methods with Newton's method (NM), Halley's method (HM) and Abbasbandy's method (AM).

Example 1. In this example we solve \(f(x)=x^{3}+4x^{2}-10\) taking \(x_{0}=-0.8\). It can be observed from Table 1 that NM takes 35 iterations, HM takes 36 and AM takes 13, while our Algorithms 1, 2 and 3 take 12, 5 and 5 iterations, respectively, to reach the root.
Table 1. Comparison of NM, HM, AM and Algorithms (1), (2) and (3).
Method | \(N\) | \(N_{f}\) | \(|f(x_{n+1})|\) | \(x_{n+1}\) |
---|---|---|---|---|
NM | \(35\) | \(70\) | \(1.105260e-24\) | |
HM | \(36\) | \(108\) | \(2.995246e-17\) | \(1.365230013414096845760806828980\) |
AM | \(13\) | \(52\) | \(6.423767e-20\) | |
Algorithm 1 | \(12\) | \(72\) | \(2.738493e-48\) | |
Algorithm 2 | \(5\) | \(25\) | \(2.812883e-25\) | |
Algorithm 3 | \(5\) | \(20\) | \(3.108248e-83\) |
Example 2. In this example we solve \(f(x)=x^3+x^2-2\) taking \(x_{0}=-0.1\). It can be observed from Table 2 that NM takes 13 iterations, HM takes 17 and AM takes 19, while our Algorithms 1, 2 and 3 take 5, 4 and 5 iterations, respectively, to reach the root.
Table 2. Comparison of NM, HM, AM and Algorithms (1), (2) and (3).
Method | \(N\) | \(N_{f}\) | \(|f(x_{n+1})|\) | \(x_{n+1}\) |
---|---|---|---|---|
NM | \(13\) | \(26\) | \(2.203086e-19\) | |
HM | \(17\) | \(51\) | \(4.338982e-22\) | \(1.000000000000000000000000000000\) |
AM | \(19\) | \(76\) | \(2.239715e-27\) | |
Algorithm 1 | \(5\) | \(30\) | \(2.338056e-31\) | |
Algorithm 2 | \(4\) | \(20\) | \(5.192250e-45\) | |
Algorithm 3 | \(5\) | \(20\) | \(6.607058e-83\) |
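Tables of this kind can be generated with a small driver that counts iterations against a stopping rule. The sketch below is our own illustration for Newton's method on Example 2; the stopping tolerance is an assumption, so counts need not match the table exactly.

```python
def newton_count(f, fp, x0, tol=1e-12, max_iter=100):
    """Newton's method; returns the approximate root and the number
    of iterations used (cf. the N column of the tables)."""
    x = x0
    for n in range(1, max_iter + 1):
        x = x - f(x) / fp(x)
        if abs(f(x)) < tol:
            return x, n
    return x, max_iter
```

Calling `newton_count(lambda x: x**3 + x**2 - 2, lambda x: 3*x**2 + 2*x, -0.1)` returns the root \(x=1\) together with the iteration count under the chosen tolerance.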
Example 3. In this example we solve \(f(x)=e^{x^2+7x-30}-1\) taking \(x_{0}=4.5\). It can be observed from Table 3 that NM takes 27 iterations, HM takes 14 and AM takes 16, while our Algorithms 1, 2 and 3 take 8, 7 and 7 iterations, respectively, to reach the root.
Table 3. Comparison of NM, HM, AM and Algorithms (1), (2) and (3)
Method | \(N\) | \(N_{f}\) | \(|f(x_{n+1})|\) | \(x_{n+1}\) |
---|---|---|---|---|
NM | \(27\) | \(54\) | \(6.454129e-23\) | |
HM | \(14\) | \(42\) | \(1.217550e-25\) | \( 3.000000000000000000000000000000\) |
AM | \(16\) | \(64\) | \(1.136732e-17\) | |
Algorithm 1 | \(8\) | \(48\) | \(1.261140e-22\) | |
Algorithm 2 | \(7\) | \(35\) | \(6.546702e-15\) | |
Algorithm 3 | \(7\) | \(28\) | \(9.047215e-71\) |
Example 4. In this example we solve \(f(x)=x^{2}-e^{x}-3x+2\) taking \(x_{0}=3.5\). It can be observed from Table 4 that NM takes 6 iterations, HM takes 5 and AM takes 5, while our Algorithms 1, 2 and 3 take 2, 3 and 3 iterations, respectively, to reach the root.
Table 4. Comparison of NM, HM, AM and Algorithms (1), (2) and (3)
Method | \(N\) | \(N_{f}\) | \(|f(x_{n+1})|\) | \(x_{n+1}\) |
---|---|---|---|---|
NM | \(6\) | \(12\) | \(4.925534e-15\) | |
HM | \(5\) | \(15\) | \(1.463064e-40\) | \(0.257530285439860760455367304937\) |
AM | \(5\) | \(20\) | \(1.120893e-28\) | |
Algorithm 1 | \(2\) | \(12\) | \(8.978612e-19\) | |
Algorithm 2 | \(3\) | \(15\) | \(0.000000e+00\) | |
Algorithm 3 | \(3\) | \(12\) | \(4.980111e-66\) |
Example 5. In this example we solve \(f(x)=xe^{x^2}-\sin^{2}x+3\cos x+5\) taking \(x_{0}=1.1\). It can be observed from Table 5 that NM takes 45 iterations, HM takes 44 and AM takes 50, while our Algorithms 1, 2 and 3 take 14, 12 and 12 iterations, respectively, to reach the root.
Table 5. Comparison of NM, HM, AM and Algorithms (1), (2) and (3)
Method | \(N\) | \(N_{f}\) | \(|f(x_{n+1})|\) | \(x_{n+1}\) |
---|---|---|---|---|
NM | \(45\) | \(90\) | \(1.268546e-15\) | |
HM | \(44\) | \(132\) | \(1.169824e-26\) | \( -1.207647827130918927009416758360 \) |
AM | \(50\) | \(200\) | \(2.868208e-29\) | |
Algorithm 1 | \(14\) | \(84\) | \(1.935782e-64\) | |
Algorithm 2 | \(12\) | \(60\) | \(4.515078e-97\) | |
Algorithm 3 | \(12\) | \(48\) | \(4.515078e-97\) |
Example 6. In this example we solve \(f(x)=x^{2}+\sin(\frac{x}{5})-\frac{1}{4}\) taking \(x_{0}=2.2\). It can be observed from Table 6 that NM takes 7 iterations, HM takes 5 and AM takes 7, while our Algorithms 1, 2 and 3 take 2, 2 and 2 iterations, respectively, to reach the root.
Table 6. Comparison of NM, HM, AM and Algorithms (1), (2) and (3)
Method | \(N\) | \(N_{f}\) | \(|f(x_{n+1})|\) | \(x_{n+1}\) |
---|---|---|---|---|
NM | \(7\) | \(14\) | \(7.777907e-23\) | |
HM | \(5\) | \(15\) | \(1.210132e-42\) | \(0.409992017989137131621258376499\) |
AM | \(7\) | \(28\) | \(2.132547e-32\) | |
Algorithm 1 | \(2\) | \(12\) | \(5.800844e-23\) | |
Algorithm 2 | \(2\) | \(10\) | \(5.897018e-23\) | |
Algorithm 3 | \(2\) | \(8\) | \(4.106937e-22\) |
Conclusions
Three new algorithms for solving nonlinear equations have been established. The efficiency indices of Algorithms 1, 2 and 3 are 1.5131, 1.6438 and 1.7783, respectively, and their convergence orders are twelve, twelve and ten, respectively. The performance of the developed algorithms is illustrated on several examples, where they compare favourably with Newton's method, Halley's method and Abbasbandy's method.

Author Contributions
All authors contributed equally to the writing of this paper. All authors read and approved the final manuscript.

Competing Interests
The authors do not have any competing interests in the manuscript.

References
- Daftardar-Gejji, V., & Jafari, H. (2006). An iterative method for solving nonlinear functional equations. Journal of Mathematical Analysis and Applications, 316(2), 753-763.
- Abdou, M. A., & Soliman, A. A. (2005). Variational iteration method for solving Burger's and coupled Burger's equations. Journal of Computational and Applied Mathematics, 181(2), 245-251.
- Amat, S., Busquier, S., & Gutiérrez, J. M. (2003). Geometric constructions of iterative functions to solve nonlinear equations. Journal of Computational and Applied Mathematics, 157(1), 197-205.
- Dembo, R. S., Eisenstat, S. C., & Steihaug, T. (1982). Inexact Newton methods. SIAM Journal on Numerical Analysis, 19(2), 400-408.
- Scavo, T. R., & Thoo, J. B. (1995). On the geometry of Halley's method. The American Mathematical Monthly, 102(5), 417-426.
- Noor, K. I., Noor, M. A., & Momani, S. (2007). Modified Householder iterative method for nonlinear equations. Applied Mathematics and Computation, 190(2), 1534-1539.
- Abbasbandy, S. (2003). Improving Newton-Raphson method for nonlinear equations by modified Adomian decomposition method. Applied Mathematics and Computation, 145(2-3), 887-893.