Open Journal of Mathematical Sciences
ISSN: 2523-0212 (Online) 2616-4906 (Print)
DOI: 10.30538/oms2021.0166
Local convergence for a family of sixth order methods with parameters
Christopher I. Argyros, Michael Argyros, Ioannis K. Argyros\(^1\), Santhosh George
Department of Computing and Technology, Cameron University, Lawton, OK 73505, USA. (C.I.A.)
Department of Computing and Technology, Cameron University, Lawton, OK 73505, USA. (M.A.)
Department of Mathematical Sciences, Cameron University, Lawton, OK 73505, USA. (I.K.A.)
Department of Mathematical and Computational Sciences, National Institute of Technology Karnataka, India-575 025. (S.G.)
\(^{1}\)Corresponding Author: iargyros@cameron.edu
Abstract
Keywords:
1. Introduction
Consider the problem of solving the equation \(F(x)=0,\) where \(F:\Omega\subset X\longrightarrow Y\) is a Fréchet differentiable operator between Banach spaces \(X\) and \(Y.\)
In this paper we study the local convergence of a family of sixth order iterative methods using assumptions only on the first derivative of \(F.\) Usually the convergence order is obtained using Taylor expansions and conditions on high order derivatives not appearing in the methods [1,2,3,4,5,6,7,8,9,10,11,12,13]. These conditions limit the applicability of the methods.
For example, let \( X=Y=\mathbb{R}, \,D= [-\frac{1}{2}, \frac{3}{2}].\) Define \(f\) on \(D\) by
\[f(s)=\left\{\begin{array}{cc} s^3\log s^2+s^5-s^4& \text{if}\,\,s\neq 0,\\ 0& \text{if}\,\, s=0. \end{array}\right. \] Then, we have \(x_*=1,\) and \[f'(s)= 3s^2\log s^2 + 5s^4- 4s^3+ 2s^2 ,\] \[f''(s)= 6s\log s^2 + 20s^3 -12s^2 + 10s,\] \[f'''(s) = 6\log s^2 + 60s^2-24s + 22.\] Obviously, \(f'''(s)\) is not bounded on \(D,\) so the convergence of these methods is not guaranteed by the analyses in these papers. The family of methods we are interested in is:
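A few lines of Python make the obstruction concrete: evaluating the formulas above near the origin shows the logarithmic term driving \(f'''\) to \(-\infty,\) so \(f'''\) is unbounded on \(D\):

```python
import math

def f(s):
    # academic example: f(s) = s^3 log(s^2) + s^5 - s^4 for s != 0, and f(0) = 0
    return s**3 * math.log(s**2) + s**5 - s**4 if s != 0 else 0.0

def f3(s):
    # third derivative: f'''(s) = 6 log(s^2) + 60 s^2 - 24 s + 22
    return 6.0 * math.log(s**2) + 60.0 * s**2 - 24.0 * s + 22.0

print(f(1.0))  # x_* = 1 is a zero of f
for s in (1e-1, 1e-3, 1e-5):
    print(s, f3(s))  # the log term drives f''' to -infinity as s -> 0
```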
The efficiency and convergence order were given in [14] when \(X=Y=\mathbb{R}^k.\) The convergence was shown using the seventh derivative. We include error bounds on \(\|x_n-x_*\|\) and uniqueness results not given in [14]. Our technique is general enough to extend the applicability of other methods [1,2,3,4,5,6,7,8,9,10,11,12,13].
The article contains the local convergence analysis in Section 2 and numerical examples in Section 3.
2. Local convergence
We develop some real parameters and functions. Set \(S=[0, \infty).\) Suppose:
- (i) \( \omega_0(t)-1 \) has a least zero \(R_0\in S-\{0\}\) for some continuous and nondecreasing function \(\omega_0:S\longrightarrow S.\) Set \(S_0=[0, R_0).\)
- (ii) \( \varphi_{1}(t)-1 \) has a least zero \(r_1\in S_0-\{0\}\) for some continuous and nondecreasing functions \(\omega:S_0\longrightarrow S, \omega_1:S_0\longrightarrow S,\) with \(\varphi_1:S_0\longrightarrow S\) defined by \[\varphi_1(t)=\frac{\int_0^1\omega((1-\theta)t)d\theta+|1-\gamma|\int_0^1\omega_1(\theta t)d\theta}{1-\omega_0(t)}.\]
- (iii) \(\varphi_2(t)-1\) has a least zero \(r_2\in S_0-\{0\}\) for some function \(\zeta:S_0\longrightarrow S\) with \(\varphi_2:S_0\longrightarrow S\) defined by \[\varphi_{2}(t)=\frac{\int_0^1\omega((1-\theta)t)d\theta+\zeta(t)\int_0^1\varphi_1(\theta t)d\theta}{1-\omega_0(t)},\] where \(\zeta(t)=|a_1-1|+\frac{\omega_1(t)}{1-\omega_0(\varphi_1(t)t)}+\frac{|a_3|\omega_0(\varphi_{1}(t)t)}{1-\omega_0(t)}+|a_4|\left(\frac{\omega_1(t)}{1-\omega_0(\varphi_1(t)t)}\right)^2.\)
- (iv) \(\omega_0(\varphi_1(t)t)-1\) has a least zero \(R_1\in S_0-\{0\}.\) Set \(R=\min\{R_0, R_1\}\) and \(S_1=[0, R).\)
- (v) \(\varphi_3(t)-1\) has a least zero \(r_3\in S_1-\{0\}\) for some function \(\psi:S_1\longrightarrow S,\) with \(\varphi_3:S_1\longrightarrow S\) defined by \begin{align*} \varphi_3(t)=&\left[\frac{\int_0^1\omega((1-\theta)\varphi_2(t)t)d\theta}{1-\omega_0(\varphi_2(t)t)}\right.+\left.\frac{(\omega_0(\varphi_{2}(t)t)+\omega_0(\varphi_1(t)t))\int_0^1\omega_1(\theta \varphi_2(t)t)d\theta}{(1-\omega_0(\varphi_2(t)t))(1-\omega_0(\varphi_1(t)t))}\right.\\&+\left.\frac{\psi(t)\int_0^1\omega_1(\theta\varphi_2(t)t)d\theta}{1-\omega_0(\varphi_2(t)t)}\right]\varphi_2(t), \end{align*} where \(\psi(t)=|b_1-1|+|b_2|\frac{\omega_1(\varphi_1(t)t)}{1-\omega_0(t)}+|b_3|\frac{\omega_1(t)}{1-\omega_0(\varphi_1(t)t)}+|b_4|\left(\frac{\omega_1(\varphi_1(t)t)}{1-\omega_0(t)}\right)^2.\)
Define the parameter \(r\) by \[r=\min\{r_1, r_2, r_3\}.\]
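Each radius \(r_i\) is the least positive solution of a scalar equation, so it can be found by bisection. Below is a minimal sketch for \(r_1,\) assuming purely illustrative constants \(\omega_0(t)=t,\) \(\omega(t)=1.2t,\) \(\omega_1(t)=2\) (constant), and a hypothetical parameter \(\gamma=0.9\); these are not the paper's actual values:

```python
def phi1(t, L0=1.0, L=1.2, w1=2.0, gamma=0.9):
    # phi_1(t) = [ int_0^1 omega((1-theta)t) dtheta
    #              + |1-gamma| int_0^1 omega_1(theta t) dtheta ] / (1 - omega_0(t))
    # with omega_0(t) = L0*t, omega(t) = L*t (first integral = L*t/2)
    # and omega_1 constant equal to w1 (second integral = w1)
    return (L * t / 2.0 + abs(1.0 - gamma) * w1) / (1.0 - L0 * t)

# bisection for the least zero of phi1(t) - 1 on (0, R_0), where R_0 = 1/L0
lo, hi = 0.0, 1.0 - 1e-12
for _ in range(60):
    mid = 0.5 * (lo + hi)
    if phi1(mid) < 1.0:
        lo = mid
    else:
        hi = mid
r1 = 0.5 * (lo + hi)
print(r1)  # radius r_1 for these illustrative choices
```

For these choices the equation \(0.6t+0.2=1-t\) gives \(r_1=0.5\) in closed form, which the bisection recovers.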
Our local convergence analysis uses the hypotheses (H), where the functions \(\omega_0, \omega, \omega_1\) are as previously given and \(x_*\) is a simple zero of \(F.\) Suppose:
- (H1) \(\|F'(x_*)^{-1}(F'(u)-F'(x_*))\|\leq \omega_0(\|u-x_*\|)\) for each \(u\in \Omega.\) Set \(\Omega_0=\Omega\cap T(x_*,R_0)\);
- (H2) \(\|F'(x_*)^{-1}(F'(u)-F'(v))\|\leq \omega(\|u-v\|)\) and \(\|F'(x_*)^{-1}F'(u)\|\leq \omega_1(\|u-x_*\|)\) for each \(u,v\in \Omega_0\);
- (H3) \(\bar{T}(x_*,r)\subset \Omega;\) and
- (H4) There exists \(\beta\geq r\) satisfying \(\int_0^1\omega_0(\theta \beta)d\theta < 1.\) Set \(\Omega_1=\Omega\cap \bar{T}(x_*,\beta).\)
Theorem 1. Under the hypotheses (H), choose a starting point \(x_0\in T(x_*,r)-\{x_*\}.\) Then the sequence \(\{x_n\}\) generated by method (2) is well defined in \(T(x_*,r),\) remains in \(T(x_*,r)\) for each \(n=0,1,2,\ldots,\) and \(\lim_{n\longrightarrow \infty}x_n=x_*,\) which is the only zero of \(F\) in the set \(\Omega_1\) given in (H4).
Proof. The following assertions shall be shown by induction:
Set \(M=\int_0^1F'(x_*+\theta(q-x_*))d\theta\) for some \(q\in \Omega_1\) with \(F(q)=0.\) Using (H1) and (H4), \[\|F'(x_*)^{-1}(M-F'(x_*))\|\leq \int_0^1\omega_0(\theta\|q-x_*\|)d\theta \leq \int_0^1\omega_0(\theta \beta)d\theta < 1,\] so \(q=x_*\) follows from the identity \(0=F(q)-F(x_*)=M(q-x_*)\) and the invertibility of \(M.\)
Remark 1.
- 1. In view of (H2) and the estimate \begin{eqnarray*} \|F'(x^\ast)^{-1}F'(x)\|&=&\|F'(x^\ast)^{-1}(F'(x)-F'(x^\ast))+I\|\\ &\leq& 1+\|F'(x^\ast)^{-1}(F'(x)-F'(x^\ast))\| \leq 1+\omega_0(\|x-x^\ast\|), \end{eqnarray*} the second condition in (H2) can be dropped and \(\omega_1\) can be replaced by \(\omega_1(t)=1+\omega_0(t)\) or \(\omega_1(t)=1+\omega_0(R_0),\) since \(t\in [0, R_0).\)
- 2. The results obtained here can be used for operators \(F\) satisfying autonomous differential equations [15] of the form \(F'(x)=P(F(x))\) where \(P\) is a continuous operator. Then, since \(F'(x^\ast)=P(F(x^\ast))=P(0),\) we can apply the results without actually knowing \(x^\ast.\) For example, let \(F(x)=e^x-1.\) Then, we can choose: \(P(x)=x+1.\)
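The observation in item 2 is easy to verify numerically for the stated example \(F(x)=e^x-1,\) \(P(x)=x+1\): the identity \(F'(x)=P(F(x))\) holds since \(F(x)+1=e^x,\) and \(F'(x^\ast)=P(0)\) is available without knowing \(x^\ast\):

```python
import math

def F(x):
    # F(x) = e^x - 1, so F'(x) = e^x
    return math.exp(x) - 1.0

def P(t):
    # P(t) = t + 1 satisfies F'(x) = P(F(x)), since F(x) + 1 = e^x
    return t + 1.0

for x in (-1.0, 0.0, 0.5, 2.0):
    assert abs(math.exp(x) - P(F(x))) < 1e-12   # F'(x) == P(F(x))
print(P(0.0))  # F'(x_*) = P(0) = 1, computed without knowing x_*
```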
- 3. Let \(\omega_0(t)=L_0t\) and \(\omega(t)=Lt.\) In [15,16] we showed that \(r_A=\frac{2}{2L_0+L}\) is the convergence radius of Newton's method \begin{equation} x_{n+1}=x_n-F'(x_n)^{-1}F(x_n)\,\,\, \text{for each} \,\,\,n=0,1,2,\cdots, \end{equation} whereas the radius due to Rheinboldt [10] and Traub [12] is \begin{equation} r_R=\frac{2}{3L}.\end{equation} Since \(L_0\leq L,\) we have \(r_R\leq r_A.\)
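Item 3 can be illustrated numerically. The sketch below reuses the scalar function \(e^x-1\) (the first component of Example 1), for which \(L_0=e-1\) and \(L=e^{1/(e-1)}\) on the unit ball, compares \(r_A\) with \(r_R,\) and checks that Newton's method converges from a starting point inside \(T(x_*,r_A)\):

```python
import math

# Lipschitz-type constants for F(x) = e^x - 1 on the unit ball (as in Example 1)
L0 = math.e - 1.0                     # omega_0(t) = L0 * t
L = math.exp(1.0 / (math.e - 1.0))    # omega(t)   = L * t

r_A = 2.0 / (2.0 * L0 + L)   # radius from [15,16], uses both L0 and L
r_R = 2.0 / (3.0 * L)        # Rheinboldt/Traub radius, uses L only
print(r_A, r_R)              # r_A >= r_R since L0 <= L

x = 0.99 * r_A               # start just inside the ball T(x_*, r_A), x_* = 0
for _ in range(10):
    x = x - (math.exp(x) - 1.0) / math.exp(x)   # Newton step
print(abs(x))                # converged to the zero x_* = 0
```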
- 4. We can compute the computational order of convergence (COC) defined by \(\xi= \frac{\ln\left(\frac{d_{n+1}}{d_n}\right)}{\ln\left(\frac{d_n}{d_{n-1}}\right)}, \) where \(d_n=\|x_n-x^\ast\|\) or the approximate computational order of convergence \(\xi_1= \frac{\ln\left(\frac{e_{n+1}}{e_n}\right)}{\ln\left(\frac{e_n}{e_{n-1}}\right)}, \) where \(e_n=\|x_n-x_{n-1}\|.\)
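The COC formula can be evaluated directly from three consecutive errors. A small sketch using Newton's method on \(f(x)=x^2-2\) (an illustrative problem, not one from this paper) recovers the expected order \(2\):

```python
import math

def coc(d0, d1, d2):
    # computational order of convergence from consecutive errors d_n = |x_n - x_*|
    return math.log(d2 / d1) / math.log(d1 / d0)

# Newton iterates for f(x) = x^2 - 2 with x_* = sqrt(2)
x, errs = 1.5, []
for _ in range(3):
    x = x - (x * x - 2.0) / (2.0 * x)
    errs.append(abs(x - math.sqrt(2.0)))

print(coc(*errs))  # close to 2, the order of Newton's method
```

The approximate order \(\xi_1\) is computed the same way with \(e_n=\|x_n-x_{n-1}\|\) in place of \(d_n,\) which avoids needing \(x^\ast.\)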
3. Numerical Examples
Example 1. Consider the kinematic system \[F_1'(x)=e^x,\, F_2'(y)=(e-1)y+1,\, F_3'(z)=1\] with \(F_1(0)=F_2(0)=F_3(0)=0.\) Let \(F=(F_1,F_2,F_3).\) Let \({B}_1={B}_2=\mathbb{R}^3, D=\bar{B}(0,1), p=(0, 0, 0)^t.\) Define function \(F\) on \(D\) for \(w=(x,y, z)^t\) by \[ F(w)=(e^x-1, \frac{e-1}{2}y^2+y, z)^t. \] Then, we get \[F'(w)=\left[ \begin{array}{ccc} e^x&0&0\\ 0&(e-1)y+1&0\\ 0&0&1 \end{array}\right], \] so \( \omega_0(t)=(e-1)t, \omega(t)=e^{\frac{1}{e-1}}t, \omega_1(t)=e^{\frac{1}{e-1}}.\) Then, the radii are \[r_{1}=0.154407,\, r_2=0.367385,\, r_3=0.323842.\]
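The Jacobian displayed above can be checked against central differences; the sketch below also confirms \(F'(p)=I\) at the solution \(p=(0,0,0)^t,\) so the \(\omega\)-conditions reduce to plain Lipschitz-type estimates on \(F'\):

```python
import math

def F(w):
    # F from Example 1
    x, y, z = w
    return [math.exp(x) - 1.0, (math.e - 1.0) / 2.0 * y * y + y, z]

def Fprime_diag(w):
    # diagonal entries of the (diagonal) Jacobian displayed in the text
    x, y, z = w
    return [math.exp(x), (math.e - 1.0) * y + 1.0, 1.0]

w, h = (0.1, -0.2, 0.3), 1e-6
for i in range(3):
    wp, wm = list(w), list(w)
    wp[i] += h
    wm[i] -= h
    fd = (F(wp)[i] - F(wm)[i]) / (2.0 * h)   # central difference
    assert abs(fd - Fprime_diag(w)[i]) < 1e-8

print(Fprime_diag((0.0, 0.0, 0.0)))  # identity at the solution p = (0,0,0)^t
```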
Example 2. Consider \({B}_1={B}_2=C[0,1],\) \(D=\overline{B}(0,1)\) and \(F:D\longrightarrow B_2\) defined by
Example 3. For the academic example of the introduction, we have \(\omega_0(t)=\omega(t)=96.6629073 t\) and \(\omega_1(t) =2.\) Then, the radii are \[r_{1}=0.00229894,\, r_2=0.0065021,\, r_3=0.0905654.\]
Author Contributions
All authors contributed equally.
Conflicts of Interest
The authors declare no conflict of interest.
References
1. Amat, S., Busquier, S., Grau, Á., & Grau-Sánchez, M. (2013). Maximum efficiency for a family of Newton-like methods with frozen derivatives and some applications. Applied Mathematics and Computation, 219(15), 7954-7963.
2. Cordero, A., Torregrosa, J. R., & Vassileva, M. P. (2013). Increasing the order of convergence of iterative schemes for solving nonlinear systems. Journal of Computational and Applied Mathematics, 252, 86-94.
3. Cordero, A., Martínez, E., & Torregrosa, J. R. (2009). Iterative methods of order four and five for systems of nonlinear equations. Journal of Computational and Applied Mathematics, 231(2), 541-551.
4. Cordero, A., Hueso, J. L., Martínez, E., & Torregrosa, J. R. (2012). Increasing the convergence order of an iterative method for nonlinear systems. Applied Mathematics Letters, 25(12), 2369-2374.
5. Chicharro, F., Cordero, A., Gutiérrez, J. M., & Torregrosa, J. R. (2013). Complex dynamics of derivative-free methods for nonlinear equations. Applied Mathematics and Computation, 219(12), 7023-7035.
6. Darvishi, M. T., & Barati, A. (2007). A fourth-order method from quadrature formulae to solve systems of nonlinear equations. Applied Mathematics and Computation, 188(1), 257-261.
7. Grau-Sánchez, M., Grau, Á., & Noguera, M. (2011). On the computational efficiency index and some iterative methods for solving systems of nonlinear equations. Journal of Computational and Applied Mathematics, 236(6), 1259-1266.
8. Gutiérrez, J. M., Hernández, M. A., & Romero, N. (2010). Dynamics of a new family of iterative processes for quadratic polynomials. Journal of Computational and Applied Mathematics, 233(10), 2688-2695.
9. Neta, B., & Petkovic, M. S. (2010). Construction of optimal order nonlinear solvers using inverse interpolation. Applied Mathematics and Computation, 217(6), 2448-2455.
10. Rheinboldt, W. C. (1975). An Adaptive Continuation Process for Solving Systems of Nonlinear Equations. University of Maryland.
11. Sharma, J. R., Guha, R. K., & Sharma, R. (2013). An efficient fourth order weighted-Newton method for systems of nonlinear equations. Numerical Algorithms, 62(2), 307-323.
12. Traub, J. F. (1982). Iterative Methods for the Solution of Equations (Vol. 312). American Mathematical Society.
13. Wang, X., Kou, J., & Li, Y. (2009). Modified Jarratt method with sixth-order convergence. Applied Mathematics Letters, 22(12), 1798-1802.
14. Hueso, J. L., Martínez, E., & Teruel, C. (2015). Convergence, efficiency and dynamics of new fourth and sixth order families of iterative methods for nonlinear systems. Journal of Computational and Applied Mathematics, 275, 412-420.
15. Argyros, I. K. (2007). Computational Theory of Iterative Methods. Series: Studies in Computational Mathematics, 15, Editors: Chui, C. K. and Wuytack, L., Elsevier, New York.
16. Argyros, I. K., & Magreñán, A. A. (2017). Iterative Methods and their Dynamics with Applications. CRC Press, New York, USA.
17. Argyros, I. K., George, S., & Magreñán, A. A. (2015). Local convergence for multi-point-parametric Chebyshev–Halley-type methods of high convergence order. Journal of Computational and Applied Mathematics, 282, 215-224.