Open Journal of Mathematical Analysis
ISSN: 2616-8111 (Online) 2616-8103 (Print)
DOI: 10.30538/psrp-oma2018.0013
New iterative methods using variational iteration technique and their dynamical behavior
Muhammad Nawaz\(^1\), Amir Naseem, Waqas Nazeer
Department of Mathematics, Govt. Postgraduate College, Sahiwal, Pakistan (M.N.)
Department of Mathematics, Lahore Leeds University, Lahore, Pakistan (A.N.)
Division of Science and Technology, University of Education, Lahore, 54000, Pakistan (W.N.)
\(^{1}\)Corresponding author: mathvision204@gmail.com
Abstract
Keywords:
1. Introduction
One of the most important problems in numerical analysis is to find the values of \(x\) which satisfy the equation $$ f(x)=0.$$ The solution of such problems has many applications in the applied sciences. Various numerical methods have been developed for solving these problems using different techniques such as Adomian decomposition, Taylor's series, perturbation methods, quadrature formulas and the variational iteration technique; see [1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19, 20] and the references therein. One of the most famous and oldest methods for solving nonlinear equations is the classical Newton's method, which can be written as $$x_{n+1}=x_{n}-\frac{f(x_{n})}{f^{\prime }(x_{n})},\qquad n=0,1,2,\ldots,\quad f^{\prime }(x_{n})\neq 0,$$ and which converges quadratically to a simple root.
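For reference, the following is a minimal Python sketch of this classical iteration; the function names, test function, and tolerance are illustrative assumptions rather than part of the original paper.

```python
import math

def newton(f, df, x0, eps=1e-15, max_iter=100):
    """Classical Newton's method: x_{n+1} = x_n - f(x_n)/f'(x_n)."""
    x = x0
    for n in range(1, max_iter + 1):
        fx = f(x)
        if abs(fx) < eps:
            return x, n
        x = x - fx / df(x)  # assumes f'(x) != 0 near the root
    return x, max_iter

# Example: root of f(x) = x*exp(x) - 1 starting from x0 = 1
root, n_iter = newton(lambda x: x * math.exp(x) - 1.0,
                      lambda x: (x + 1.0) * math.exp(x), 1.0)
```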
2. Construction of iterative methods using variational technique

In this section, we develop some new sixth-order iterative methods for solving nonlinear equations. Using the variational iteration technique, we derive the main recurrence relation from which the new iterative methods are obtained by considering some special cases of the auxiliary function \(g\). These are multi-step methods consisting of predictor and corrector steps, and their convergence is better than that of one-step methods. Consider the nonlinear equation $$f(x)=0.$$ The variational iteration technique leads to the following main recurrence relation, which couples a Halley-type predictor step with a corrector step built from a nonzero auxiliary function \(g\): \begin{eqnarray*} y_{n} &=& x_{n}-\frac{2f(x_{n})f^{\prime }(x_{n})}{2f^{\prime2}(x_{n})-f(x_{n})f^{\prime \prime }(x_{n})},\quad n=0,1,2,..., \\ x_{n+1}&=& y_n-\frac{f(y_n)g(x_n)}{f^{\prime }(y_n)g(x_n)+f(y_n)g^{\prime}(x_n)}.\qquad\qquad (12) \end{eqnarray*}

Case 1. Let \(g(x_n)=\exp(\beta x_n^{2})\); then \(g^{\prime}(x_n)= 2\beta x_{n} g(x_n)\). Using these values in (12), we obtain the following algorithm.
Algorithm 2.1. For a given \(x_{0}\), compute the approximate solution \(x_{n+1}\) by the following iterative schemes: \begin{eqnarray*} y_{n} &=& x_{n}-\frac{2f(x_{n})f^{\prime }(x_{n})}{2f^{\prime2}(x_{n})-f(x_{n})f^{\prime \prime }(x_{n})},n=0,1,2,..., \\ x_{n+1}&=& y_n-\frac{f(y_n)}{[f^{\prime }(y_n)+2\beta x_n f(y_n)]}. \end{eqnarray*}
Case 2. Let \(g(x_n)=\exp(-\beta f(x_n))\); then \(g^{\prime}(x_n)= -\beta f^{\prime }(x_{n})g(x_n)\). Using these values in (12), we obtain the following algorithm.
Algorithm 2.2. For a given \(x_{0}\), compute the approximate solution \(x_{n+1}\) by the following iterative schemes: \begin{eqnarray*} y_{n} &=&x_{n}-\frac{2f(x_{n})f^{\prime }(x_{n})}{2f^{\prime2}(x_{n})-f(x_{n})f^{\prime \prime }(x_{n})},n=0,1,2,..., \\ x_{n+1}&=&y_n-\frac{f(y_n)}{[f^{\prime }(y_n)-\beta f(y_n)f^{\prime }(x_n)]}. \end{eqnarray*}
Case 3. Let \(g(x_n)=\exp(-\beta x_n)\); then \(g^{\prime}(x_n)= -\beta g(x_n)\). Using these values in (12), we obtain the following algorithm.
Algorithm 2.3. For a given \(x_{0}\), compute the approximate solution \(x_{n+1}\) by the following iterative schemes: \begin{eqnarray*} y_{n} &=& x_{n}-\frac{2f(x_{n})f^{\prime }(x_{n})}{2f^{\prime2}(x_{n})-f(x_{n})f^{\prime \prime }(x_{n})},n=0,1,2,..., \\ x_{n+1}&=& y_n-\frac{f(y_n)}{[f^{\prime }(y_n)-\beta f(y_n)]}. \end{eqnarray*}
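To make these schemes concrete, the following is a minimal Python sketch of Algorithm 2.3; the function names, the default \(\beta=1\), and the stopping test are illustrative assumptions, not the authors' program. Algorithms 2.1 and 2.2 follow from the same template by replacing the corrector's denominator.

```python
import math

def algorithm_2_3(f, df, d2f, x0, beta=1.0, eps=1e-15, max_iter=100):
    """Two-step scheme of Algorithm 2.3: Halley-type predictor followed by
    the corrector derived from the auxiliary function g(x) = exp(-beta*x).
    All denominators are assumed to stay nonzero along the iteration."""
    x = x0
    for n in range(1, max_iter + 1):
        fx, dfx, d2fx = f(x), df(x), d2f(x)
        y = x - 2.0 * fx * dfx / (2.0 * dfx**2 - fx * d2fx)  # predictor
        fy = f(y)
        x_new = y - fy / (df(y) - beta * fy)                 # corrector
        if abs(x_new - x) < eps or abs(f(x_new)) < eps:
            return x_new, n
        x = x_new
    return x, max_iter

# Example 4.6 below: f(x) = x*exp(x) - 1 with x0 = 1
root, n_iter = algorithm_2_3(lambda x: x * math.exp(x) - 1.0,
                             lambda x: (x + 1.0) * math.exp(x),
                             lambda x: (x + 2.0) * math.exp(x), 1.0)
print(root, n_iter)  # root near 0.567143...
```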
By taking different values of \(\beta\), we can obtain different iterative methods. To obtain the best results from the above algorithms, avoid values of \(\beta\) that make the denominator zero or very small in magnitude.

3. Convergence Analysis
In this section, we discuss the convergence order of the main and general iteration scheme (12).

Theorem 3.1. Suppose that \(\alpha \) is a root of the equation \(f(x)=0\). If \(f(x)\) is sufficiently smooth in the neighborhood of \(\alpha \), then the convergence order of the main and general iteration scheme described in relation (12) is at least six.
Proof. To analyze the convergence of the main and general iteration scheme described in relation (12), suppose that \(\alpha \) is a root of the equation \(f(x)=0\) and let \(e_n=x_n-\alpha\) be the error at the \(n\)th iteration. By using the Taylor series expansion about \(\alpha\), we have \begin{eqnarray*} f(x_n)&=&{f^{\prime }(\alpha)e_n}+\frac{1}{2!}{f^{\prime \prime }(\alpha)e_n^2}+\frac{1}{3!}{f^{\prime \prime \prime }(\alpha)e_n^3}+\frac{1}{4!}{f^{(iv) }(\alpha)e_n^4}+\frac{1}{5!}{f^{(v) }(\alpha)e_n^5}\\ &+&\frac{1}{6!}{f^{(vi) }(\alpha)e_n^6}+O(e_n^{7}). \end{eqnarray*} Carrying out similar expansions for \(f^{\prime}(x_n)\) and \(f^{\prime\prime}(x_n)\), substituting them into (12) and simplifying yields \(e_{n+1}=O(e_n^{6})\), which shows that the convergence order of the scheme (12) is at least six.
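As an empirical complement to Theorem 3.1, the order can be estimated by the computational order of convergence, \(\rho_n \approx \ln|e_{n+1}/e_{n}|/\ln|e_{n}/e_{n-1}|\). The sketch below is an illustrative assumption (not from the paper): it reruns the Algorithm 2.3 step inline on Example 4.6 and uses the root value reported in Table 6 as \(\alpha\).

```python
import math

def coc(f, df, d2f, x0, alpha, beta=1.0, steps=3):
    """Estimate the computational order of convergence (COC)
    rho_n = ln|e_{n+1}/e_n| / ln|e_n/e_{n-1}| along Algorithm 2.3 iterates.
    In double precision only the first ratio or two are meaningful;
    multiprecision arithmetic (e.g., mpmath) gives a cleaner reading."""
    xs = [x0]
    for _ in range(steps):
        x = xs[-1]
        y = x - 2.0 * f(x) * df(x) / (2.0 * df(x)**2 - f(x) * d2f(x))
        xs.append(y - f(y) / (df(y) - beta * f(y)))
    e = [abs(x - alpha) for x in xs]
    return [math.log(e[n + 1] / e[n]) / math.log(e[n] / e[n - 1])
            for n in range(1, len(e) - 1) if min(e[n - 1:n + 2]) > 0]

# Example 4.6: the root of x*exp(x) = 1 (value taken from Table 6)
alpha = 0.567143290409783872999968662210
print(coc(lambda x: x * math.exp(x) - 1.0,
          lambda x: (x + 1.0) * math.exp(x),
          lambda x: (x + 2.0) * math.exp(x), 1.0, alpha))
```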
4. Applications
In this section we include some nonlinear functions to illustrate the efficiency of our developed algorithms for \(\beta = 1\). We compare our developed algorithms with Newton's method (NM) [12], Ostrowski's method (OM) [7], Traub's method (TM) [12], and the modified Halley's method (MHM) [15], taking \(\varepsilon =10^{-15}\). The following stopping criteria are used in the computer programs (a combined test is sketched after the list):
- \(|x_{n+1}-x_{n}|< \varepsilon,\)
- \(|f(x_{n+1})|< \varepsilon.\)
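In code, the two criteria combine into a single test; a minimal sketch (the helper name stopped is an assumption, not from the paper):

```python
def stopped(x_new, x_old, f_new, eps=1e-15):
    # Stop when successive iterates agree or the residual is small enough
    return abs(x_new - x_old) < eps or abs(f_new) < eps
```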
Example 4.1. \(f_{1}=x^{10}-1\)
Table 1. Comparison of various iterative methods
Methods | \(N\) | \(N_{f}\) | \(x_{0}\) | \(f(x_{n+1})\) | \(x_{n+1}\) |
---|---|---|---|---|---|
NM | \(17\) | \(34\) | \(0.7\) | \(1.020203e-25\) | |
OM | \(6\) | \(18\) | \(0.7\) | \(3.610418e-20\) | |
TM | \(9\) | \(27\) | \(0.7\) | \(2.903022e-44\) | |
MHM | \(5\) | \(15\) | \(0.7\) | \(7.301483e-68\) | \(1.0000000000000000000000000\) |
Algorithm 2.1 | \(5\) | \(15\) | \(0.7\) | \(5.714130e-50\) | |
Algorithm 2.2 | \(4\) | \(12\) | \(0.7\) | \(3.873651e-18\) | |
Algorithm 2.3 | \(3\) | \(9\) | \(0.7\) | \(1.569018e-18\) | |
Example 4.2. \(f_{2}=(x-1)^3-1\)
Table 2. Comparison of various iterative methods
Methods | \(N\) | \(N_{f}\) | \(x_{0}\) | \(f(x_{n+1})\) | \(x_{n+1}\) |
---|---|---|---|---|---|
NM | \(16\) | \(32\) | \(-0.5\) | \(3.692382e-21\) | |
OM | \(9\) | \(27\) | \(-0.5\) | \(3.319738e-43\) | |
TM | \(8\) | \(24\) | \(-0.5\) | \(3.692382e-21\) | |
MHM | \(7\) | \(21\) | \(-0.5\) | \(2.178093e-15\) | \(2.000000000000000000000000000000\) |
Algorithm 2.1 | \(5\) | \(15\) | \(-0.5\) | \(1.647894e-50\) | |
Algorithm 2.2 | \(4\) | \(12\) | \(-0.5\) | \(1.279762e-65\) | |
Algorithm 2.3 | \(3\) | \(9\) | \(-0.5\) | \(2.042477e-19\) | |
Example 4.3. \(f_{3}=xe^{x^2}-\sin^{2}(x)+3\cos(x)+5\)
Table 3. Comparison of various iterative methods
Methods | \(N\) | \(N_{f}\) | \(x_{0}\) | \(f(x_{n+1})\) | \(x_{n+1}\) |
---|---|---|---|---|---|
NM | \(11\) | \(22\) | \(-2.5\) | \(5.818711e-27\) | |
OM | \(5\) | \(15\) | \(-2.5\) | \(5.818711e-27\) | |
TM | \(6\) | \(18\) | \(-2.5\) | \(2.504418e-54\) | |
MHM | \(11\) | \(33\) | \(-2.5\) | \(1.111535e-38\) | \(-1.207647827130918927009416758360 \) |
Algorithm 2.1 | \(4\) | \(12\) | \(-2.5\) | \(4.908730e-22\) | |
Algorithm 2.2 | \(5\) | \(15\) | \(-2.5\) | \(9.011081e-41\) | |
Algorithm 2.3 | \(4\) | \(12\) | \(-2.5\) | \(4.225070e-35\) | |
Example 4.4. \(f_{4}=\sin^{2}(x)-x^{2}+1\)
Table 4. Comparison of various iterative methods
Methods | \(N\) | \(N_{f}\) | \(x_{0}\) | \(f(x_{n+1})\) | \(x_{n+1}\) |
---|---|---|---|---|---|
NM | \(15\) | \(30\) | \(0.1\) | \(3.032691e-22\) | |
OM | \(8\) | \(24\) | \(0.1\) | \(2.481593e-57\) | |
TM | \(8\) | \(24\) | \(0.1\) | \(2.903022e-44\) | |
MHM | \(7\) | \(21\) | \(0.1\) | \(7.771503e-47\) | \(1.404491648215341226035086817790 \) |
Algorithm 2.1 | \(4\) | \(12\) | \(0.1\) | \(3.145402e-73\) | |
Algorithm 2.2 | \(6\) | \(18\) | \(0.1\) | \(1.208844e-45\) | |
Algorithm 2.3 | \(3\) | \(9\) | \(0.1\) | \(1.401860e-29\) | |
Example 4.5. \(f_{5}=e^{(x^2+7x-30)}-1\)
Table 5. Comparison of various iterative methods
Methods | \(N\) | \(N_{f}\) | \(x_{0}\) | \(f(x_{n+1})\) | \(x_{n+1}\) |
---|---|---|---|---|---|
NM | \(16\) | \(32\) | \(2.8\) | \(1.277036e-16\) | |
OM | \(5\) | \(15\) | \(2.8\) | \(3.837830e-24\) | |
TM | \(8\) | \(24\) | \(2.8\) | \(1.277036e-16\) | |
MHM | \(5\) | \(15\) | \(2.8\) | \(8.373329e-70\) | \(3.000000000000000000000000000000 \) |
Algorithm 2.1 | \(4\) | \(16\) | \(2.8\) | \(4.131496e-34\) | |
Algorithm 2.2 | \(3\) | \(9\) | \(2.8\) | \(9.157220e-32\) | |
Algorithm 2.3 | \(3\) | \(9\) | \(2.8\) | \(4.858181e-34\) | |
Example 4.6. \(f_{6}=xe^{x}-1\)
Table 6. Comparison of various iterative methods
Methods | \(N\) | \(N_{f}\) | \(x_{0}\) | \(f(x_{n+1})\) | \(x_{n+1}\) |
---|---|---|---|---|---|
NM | \(5\) | \(10\) | \(1\) | \(8.478184e-17\) | |
OM | \(3\) | \(9\) | \(1\) | \(8.984315e-40\) | |
TM | \(3\) | \(9\) | \(1\) | \(2.130596e-33\) | |
MHM | \(3\) | \(9\) | \(1\) | \(1.116440e-68\) | \(0.567143290409783872999968662210\) |
Algorithm 2.1 | \(2\) | \(6\) | \(1\) | \(2.910938e-19\) | |
Algorithm 2.2 | \(2\) | \(6\) | \(1\) | \(1.292777e-17\) | |
Algorithm 2.3 | \(2\) | \(6\) | \(1\) | \(2.468437e-27\) | |
Example 4.7. \(f_{7}=x^{3}+4x^{2}-15\)
Table 7. Comparison of various iterative methods
Methods | \(N\) | \(N_{f}\) | \(x_{0}\) | \(f(x_{n+1})\) | \(x_{n+1}\) |
---|---|---|---|---|---|
NM | \(38\) | \(76\) | \(-0.3\) | \(1.688878e-22\) | |
OM | \(6\) | \(18\) | \(-0.3\) | \(1.173790e-16\) | |
TM | \(19\) | \(57\) | \(-0.3\) | \(1.688878e-22\) | |
MHM | \(16\) | \(48\) | \(-0.3\) | \(3.742527e-16\) | \(1.631980805566063517522106445540 \) |
Algorithm 2.1 | \(4\) | \(12\) | \(-0.3\) | \(1.663804e-29\) | |
Algorithm 2.2 | \(6\) | \(18\) | \(-0.3\) | \(1.744284e-38\) | |
Algorithm 2.3 | \(4\) | \(12\) | \(-0.3\) | \(1.639561e-72\) | |
Tables 1-7 show the numerical comparisons of Newton's method, Ostrowski's method, Traub's method, the modified Halley's method, and our developed methods. The columns give the number of iterations \(N\) and the number of function or derivative evaluations \(N_{f}\) required to meet the stopping criteria, together with the magnitude \(|f(x_{n+1})|\) at the final estimate \(x_{n+1}\).
5. Conclusions
We have established three new sixth-order methods for solving nonlinear equations. The developed methods have efficiency index \(6^{\frac{1}{3}}\approx 1.8171\). We compared our methods with other well-known iterative methods, and the comparisons in Tables 1-7 show that our methods are faster and more efficient than other similar methods.
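For completeness, this value follows the usual definition of the efficiency index, \(\mathrm{EI}=p^{1/\theta}\), where \(p\) is the convergence order and \(\theta\) the number of function evaluations counted per iteration (here \(\theta=3\), consistent with the \(N_{f}=3N\) counts in the tables):
$$\mathrm{EI}=p^{1/\theta}=6^{1/3}\approx 1.8171.$$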
Competing Interests

The authors do not have any competing interests in the manuscript.

References
1. Nazeer, W., Naseem, A., Kang, S. M., & Kwun, Y. C. (2016). Generalized Newton Raphson's method free from second derivative. Journal of Nonlinear Sciences and Applications, 9, 2823-2831.
2. Nazeer, W., Tanveer, M., Kang, S. M., & Naseem, A. (2016). A new Householder's method free from second derivatives for solving nonlinear equations and polynomiography. Journal of Nonlinear Sciences and Applications, 9, 998-1007.
3. Chun, C. (2006). Construction of Newton-like iteration methods for solving nonlinear equations. Numerische Mathematik, 104(3), 297-315.
4. Burden, R. L., & Faires, J. D. (2010). Numerical Analysis (9th ed.). Cengage Learning.
5. Stoer, J., & Bulirsch, R. (2013). Introduction to Numerical Analysis (Vol. 12). Springer Science & Business Media.
6. Quarteroni, A., Sacco, R., & Saleri, F. (2010). Numerical Mathematics (Vol. 37). Springer Science & Business Media.
7. Chen, D., Argyros, I. K., & Qian, Q. S. (1993). A note on the Halley method in Banach spaces. Applied Mathematics and Computation, 58(2-3), 215-224.
8. Frontini, M., & Sormani, E. (2003). Some variant of Newton's method with third-order convergence. Applied Mathematics and Computation, 140(2-3), 419-426.
9. Gutiérrez, J. M., & Hernández, M. A. (1997). A family of Chebyshev-Halley type methods in Banach spaces. Bulletin of the Australian Mathematical Society, 55(1), 113-130.
10. Householder, A. S. (1970). The Numerical Treatment of a Single Nonlinear Equation. McGraw-Hill, New York.
11. Sebah, P., & Gourdon, X. (2001). Newton's method and high order iterations. Numbers Comput., 1-10.
12. Traub, J. F. (1964). Iterative Methods for the Solution of Equations. Prentice-Hall, Englewood Cliffs, NJ.
13. Inokuti, M., Sekine, H., & Mura, T. (1978). General use of the Lagrange multiplier in nonlinear mathematical physics. Variational Method in the Mechanics of Solids, 33(5), 156-162.
14. He, J. H. (2007). Variational iteration method—some recent results and new interpretations. Journal of Computational and Applied Mathematics, 207(1), 3-17.
15. Noor, K. I., & Noor, M. A. (2007). Predictor-corrector Halley method for nonlinear equations. Applied Mathematics and Computation, 188(2), 1587-1591.
16. He, J. H. (1999). Variational iteration method – a kind of non-linear analytical technique: some examples. International Journal of Non-Linear Mechanics, 34(4), 699-708.
17. Noor, M. A. (2007). New classes of iterative methods for nonlinear equations. Applied Mathematics and Computation, 191(1), 128-131.
18. Noor, M. A., & Shah, F. A. (2009). Variational iteration technique for solving nonlinear equations. Journal of Applied Mathematics and Computing, 31(1-2), 247-254.
19. Kou, J. (2007). The improvements of modified Newton's method. Applied Mathematics and Computation, 189(1), 602-609.
20. Abbasbandy, S. (2003). Improving Newton-Raphson method for nonlinear equations by modified Adomian decomposition method. Applied Mathematics and Computation, 145(2-3), 887-893.