Open Journal of Mathematical Sciences
ISSN: 2523-0212 (Online) 2616-4906 (Print)
DOI: 10.30538/oms2021.0158
On some iterative methods with frozen derivatives for solving equations
Samundra Regmi, Christopher Argyros, Ioannis K. Argyros\(^1\), Santhosh George
Learning Commons, University of North Texas at Dallas, TX 75038, USA. (S.R.)
Department of Computing Science, University of Oklahoma, Norman, OK 73071, USA. (C.A.)
Department of Mathematical Sciences, Cameron University, Lawton, OK 73505, USA. (I.K.A.)
Department of Mathematical and Computational Sciences, National Institute of Technology Karnataka, India. (S.G.)
\(^{1}\)Corresponding Author: iargyros@cameron.edu
Abstract
Keywords:
1. Introduction
We consider the problem of approximating a solution \(x_*\) of the equation \[F(x)=0, \tag{1}\] where \(F:D\subset X\longrightarrow Y\) is a continuously Fréchet differentiable operator acting between Banach spaces \(X\) and \(Y\), and \(D\) is a nonempty open convex set.
Iterative methods are used to generate a sequence converging to a solution \(x_*\) of Equation (1) under certain conditions [1,2,3,4,5,6,7,8,9,10,11,12]. Recently, there has been a surge in the development of efficient iterative methods with frozen derivatives. The convergence order is typically obtained using Taylor expansions and conditions on high-order derivatives that do not appear in the method. These conditions limit the applicability of the methods. For example, let \( X=Y=\mathbb{R}, \,D= \left[-\frac{1}{2}, \frac{3}{2}\right].\) Define \(f\) on \(D\) by
\[f(t)=\left\{\begin{array}{cc} t^3\log t^2+t^5-t^4& \text{if}\,\,t\neq0,\\ 0& \text{if}\,\, t=0. \end{array}\right. \] Then, we have \(t_*=1,\) and \begin{align*}f'(t)&= 3t^2\log t^2 + 5t^4- 4t^3+ 2t^2 ,\\ f''(t)&= 6t\log t^2 + 20t^3 -12t^2 + 10t,\\ f'''(t) &= 6\log t^2 + 60t^2-24t + 22.\end{align*} Obviously, \(f'''(t)\) is not bounded on \(D,\) so the convergence of these methods is not guaranteed by the analysis in these papers. Moreover, no comparable error estimates on the distances involved, and no uniqueness-of-solution results, are given in [6,8,10,11]. That is why we develop a general technique that can be applied to iterative methods, and we address these problems by using only the first derivative, which is the only derivative appearing in these methods. We demonstrate this technique on the method of convergence order \(3(i-1),\) defined for all \(n=0,1,2,\ldots\) by
The efficiency, convergence order, and comparisons with other methods using similar information were given in [6,8,10,11] for the case \(X=Y=\mathbb{R}^k.\) There, the convergence was shown using the seventh derivative. We include computable error bounds on \(\|x_n-x_*\|\) and uniqueness results that are not given in [6,8,10,11]. Our technique is general enough that it can be used to extend the applicability of other methods as well [1,2,3,4,5,6,7,8,9,10,11,12]. The method was developed in [10], where extensive comparisons with other methods were made.
The motivation of this paper is not to repeat those comparisons, but to introduce a technique that expands the applicability of this and other methods that rely on high-order derivatives not appearing in the methods themselves. Only the first derivative is used in our convergence hypotheses; notice that this is the only derivative appearing in the method. We also provide a computable radius of convergence, which is not given in [10]. This way we locate a set of initial points that guarantees convergence of the method. The numerical examples are chosen to show how the theoretically predicted radii are computed. In particular, the last example shows that earlier results cannot be used to establish convergence of the method. Our results significantly extend the applicability of these methods and provide a new way of looking at iterative methods. The article contains the local convergence analysis in Section 2 and the numerical examples in Section 3.
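As a sanity check on the motivational example above, the blow-up of \(f'''\) near the origin is easy to observe numerically; a minimal sketch (the function name `fppp` is ours):

```python
import math

def fppp(t):
    # f'''(t) = 6*log(t^2) + 60*t^2 - 24*t + 22, valid for t != 0
    return 6 * math.log(t ** 2) + 60 * t ** 2 - 24 * t + 22

# As t -> 0, the 6*log(t^2) term dominates and f''' diverges to -infinity,
# so f''' is unbounded on D = [-1/2, 3/2].
for t in (1e-2, 1e-4, 1e-6):
    print(t, fppp(t))
```

The polynomial terms stay bounded on \(D\), so the logarithmic term alone drives the unboundedness.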
2. Convergence analysis
Let \(\omega_0:T\longrightarrow T\) be a continuous and nondecreasing function, where \(T=[0, \infty),\) and suppose that the equation \[\omega_0(t)=1\] has a least positive solution \(\rho_0.\) Set \(T_0=[0, \rho_0),\) and let \(\omega:T_0\longrightarrow T\) and \(\omega_1:T_0\longrightarrow T\) be continuous and nondecreasing functions. Define functions \(g_1\) and \(\bar{g}_1\) on the interval \(T_0\) by
\[g_1\left(t\right)=\frac{\int_0^1\omega\left(\left(1-\theta\right)t\right)d\theta}{1-\omega_0\left(t\right)}\] and \[\bar{g}_1(t)=g_1(t)-1.\] Suppose that the equation \[\bar{g}_1(t)=0\] has a least positive solution \(r_1\in T_0.\) Moreover, define functions \(g_2\) and \(\bar{g}_2\) on \(T_0\) by
\[g_2\left(t\right)=g_1\left(t\right)+\frac{\left(\omega_0\left(g_1\left(t\right)t\right)+\omega_0\left(t\right)\right)\int_0^1\omega_1\left(\theta t\right)d\theta}{2\left(1-\omega_0\left(t\right)\right)\left(1-p\left(t\right)\right)}\] and \[\bar{g}_2(t)=g_2(t)-1.\] Suppose that the equation \[\bar{g}_2(t)=0\] has a least positive solution \(r_2\in T_0.\) Define functions \(h\) and \(\psi\) on \(T_2=[0,\rho_2)\) by
\[h(t)=\left(1+\frac{\omega_1(g_2(t)t)}{2(1-q(t))}\right)(\omega_0(t)+\omega_0(g_2(t)t))\] and \[\psi(t)=g_1(g_2(t)t)+\frac{h(t)}{(1-\omega_0(g_2(t)t))(1-\omega_0(t))}.\] Suppose that the equation \[\psi(t)-1=0\] has a least positive solution \(r_3\in T_2.\) We shall show that \(r\) is a radius of convergence for method (2), where \[r=\min\{r_1, r_2, r_3\}.\] The local convergence analysis uses the conditions (A1)-(A5), collectively denoted by (A):
- (A1) \(F:D\subset X\longrightarrow Y\) is continuously Fréchet differentiable, and there exists \(x_*\in D\) such that \(F(x_*)=0\) and \(F'(x_*)^{-1}\in L(Y,X).\)
- (A2) There exists a continuous and nondecreasing function \(\omega_0:T\longrightarrow T\) such that for each \(x\in D\), \[\left\|F'\left(x_*\right)^{-1}\left(F'\left(x\right)-F'\left(x_*\right)\right)\right\|\leq \omega_0\left(\left\|x-x_*\right\|\right).\] Set \(D_0=D\cap U(x_*,\rho_0).\)
- (A3) There exist continuous and nondecreasing functions \(\omega:T_0\longrightarrow T, \omega_1:T_0\longrightarrow T\) such that for each \(x,y\in D_0\) \[\left\|F'(x_*)^{-1}(F'(y)-F'(x))\right\|\leq \omega(\|y-x\|)\] and \[\left\|F'(x_*)^{-1}F'(x)\right\|\leq \omega_1\left(\left\|x-x_*\right\|\right).\]
- (A4) \(\bar{U}(x_*,r)\subset D.\)
- (A5) There exists \(r_*\geq r\) such that \[\int_0^1\omega_0(\theta r_*)d\theta < 1.\] Set \(D_1=D\cap \bar{U}(x_*, r_*).\)
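The radii \(r_1, r_2, r_3\) are in general found numerically as the least positive roots of the corresponding scalar equations. As an illustration under the assumed Lipschitz-type choices \(\omega_0(t)=L_0t\) and \(\omega(t)=Lt\) (so \(g_1(t)=\frac{Lt/2}{1-L_0t}\)), the root \(r_1\) of \(\bar{g}_1(t)=0\) can be bracketed and bisected; a sketch (function name and tolerance are ours):

```python
def r1_bisection(L0, L, tol=1e-12):
    """Least positive root of g1(t) - 1 = 0 for omega0(t) = L0*t, omega(t) = L*t,
    where g1(t) = (integral_0^1 omega((1-theta)*t) dtheta) / (1 - omega0(t))
               = (L*t/2) / (1 - L0*t) on [0, 1/L0)."""
    g1bar = lambda t: (L * t / 2) / (1 - L0 * t) - 1
    a, b = 0.0, (1.0 / L0) * (1 - 1e-9)   # bracket inside [0, 1/L0)
    while b - a > tol:                     # g1bar is increasing: plain bisection
        m = 0.5 * (a + b)
        if g1bar(m) < 0:
            a = m
        else:
            b = m
    return 0.5 * (a + b)

# Closed form in this linear case: r1 = 2 / (2*L0 + L).
print(r1_bisection(1.0, 1.0))  # -> 2/3 (to within tol)
```

The same bisection applies verbatim to \(\bar{g}_2\) and \(\psi-1\) once \(p\) and \(q\) are specified.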
Theorem 1. Suppose the hypotheses (A) hold. Then, for any starting point \(x_0\in U(x_*,r)-\{x_*\},\) the sequence \(\{x_n\}\) generated by method (2) is well defined in \(U(x_*,r),\) remains in \(U(x_*,r),\) and converges to \(x_*.\) Moreover, the error estimates (16)-(19) hold for all \(i=3,4,\ldots, k\) and \(n=0,1,2,\ldots\)
Proof. We shall use mathematical induction to show that the iterates \(\{x_n\}\) exist, remain in \(U(x_*,r),\) and satisfy (16)-(19). Let \(u\in U(x_*,r)-\{x_*\}.\) Using (A1), (A2) and (9), we get in turn
Finally, let \(x_{**}\in D_1\) with \(F(x_{**})=0.\) Setting \(Q=\int_0^1F'(x_{**}+\theta(x_*-x_{**}))d\theta\) and using (A2), (9) and (A5), we get
\[\left\|F'\left(x_*\right)^{-1}\left(Q-F'\left(x_*\right)\right)\right\|\leq \int_0^1\omega_0\left(\theta\left\|x_*-x_{**}\right\|\right)d\theta \leq \int_0^1\omega_0\left(\theta r_*\right)d\theta < 1,\] so \(Q^{-1}\in L(Y,X).\) Consequently, from \(0=F(x_{**})-F(x_*)=Q(x_{**}-x_*),\) we obtain \(x_{**}=x_*.\)

Remark 1.
- 1. In view of (11) and the estimate \begin{align*} \left\|F'\left(x^\ast\right)^{-1}F'\left(x\right)\right\|&=\left\|F'\left(x^\ast\right)^{-1}\left(F'\left(x\right)-F'\left(x^\ast\right)\right)+I\right\|\\ &\leq 1+\left\|F'\left(x^\ast\right)^{-1}\left(F'\left(x\right)-F'\left(x^\ast\right)\right)\right\| \\ &\leq 1+L_0\left\|x-x^\ast\right\|, \end{align*} the condition (14) can be dropped and \(M\) can be replaced by \(M(t)=1+L_0 t\) or \(M(t)=M=2,\) since \(t\in [0, \frac{1}{L_0}).\)
- 2. The results obtained here can be used for operators \(F\) satisfying autonomous differential equations [2] of the form \[F'(x)=P(F(x))\] where \(P\) is a continuous operator. Then, since \(F'(x^\ast)=P(F(x^\ast))=P(0),\) we can apply the results without actually knowing \(x^\ast.\) For example, let \(F(x)=e^x-1.\) Then, we can choose: \(P(x)=x+1.\)
- 3. Let \(\omega_0(t)=L_0t\) and \(\omega(t)=Lt.\) In [2,3] we showed that \(r_A=\frac{2}{2L_0+L}\) is the convergence radius of Newton's method \begin{equation} \label{2.30} x_{n+1}=x_n-F'\left(x_n\right)^{-1}F(x_n)\quad \text{for each}\quad n=0,1,2,\ldots \tag{33}\end{equation} This radius is at least as large as the convergence radius \begin{equation} \label{2.31}r_R=\frac{2}{3L}\tag{34}\end{equation} due to Rheinboldt [9]. In particular, if \(L_0<L,\) then \(r_R<r_A,\) and \(\frac{r_R}{r_A}\longrightarrow \frac{1}{3}\) as \(\frac{L_0}{L}\longrightarrow 0.\)
- 4. It is worth noticing that method (2) does not change when we use the conditions of Theorem 1 instead of the stronger conditions used in [6,8,11]. Moreover, we can compute the computational order of convergence (COC) defined by \[\xi= \frac{\ln\left(\frac{\|x_{n+1}-x^\ast\|}{\|x_n-x^\ast\|}\right)}{\ln\left(\frac{\|x_{n}-x^\ast\|}{\|x_{n-1}-x^\ast\|}\right)} \,,\] or the approximate computational order of convergence (ACOC) \[\xi_1= \frac{\ln\left(\frac{\|x_{n+1}-x_n\|}{\|x_n-x_{n-1}\|}\right)}{\ln\left(\frac{\|x_{n}-x_{n-1}\|}{\|x_{n-1}-x_{n-2}\|}\right)}. \] This way we obtain the order of convergence in practice while avoiding estimates involving derivatives of order higher than the first Fréchet derivative of the operator \(F.\) Note also that the computation of \(\xi_1\) does not require knowledge of the solution \(x^\ast.\)
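The ACOC \(\xi_1\) needs only four consecutive iterates; a minimal sketch, using Newton's method on the scalar equation \(x^2-2=0\) (a test problem of our choosing, not from the paper) to illustrate:

```python
import math

def acoc(x):
    """Approximate computational order of convergence from the last four iterates."""
    d = [abs(x[i + 1] - x[i]) for i in range(len(x) - 1)]
    return math.log(d[-1] / d[-2]) / math.log(d[-2] / d[-3])

# Newton iterates for f(x) = x^2 - 2, f'(x) = 2x, starting at x0 = 1.5.
xs = [1.5]
for _ in range(4):
    xn = xs[-1]
    xs.append(xn - (xn ** 2 - 2) / (2 * xn))

print(acoc(xs))  # -> approximately 2, the order of Newton's method
```

Note that \(\xi_1\) uses only differences of successive iterates, so the exact root \(\sqrt{2}\) is never referenced.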
3. Numerical Examples
Example 1. Consider the kinematic system \[\begin{cases} F_1'(x)=e^x,\\ F_2'(y)=(e-1)y+1,\\ F_3'(z)=1.\end{cases}\] with \(F_1(0)=F_2(0)=F_3(0)=0.\) Letting \(F=(F_1,F_2,F_3),\) \(X=Y=\mathbb{R}^3,\ \ D=\bar{U}(0,1),\ \ p=(0, 0, 0)^T,\) and defining the function \(F\) on \(D\) for \(w=(x,y, z)^T\) by \[ F(w)=\left(e^x-1, \frac{e-1}{2}y^2+y, z\right)^T, \] we get \[F'(v)=\left[ \begin{array}{ccc} e^x&0&0\\ 0&(e-1)y+1&0\\ 0&0&1 \end{array}\right], \] so \( \omega_0(t)=(e-1)t,\ \ \omega(t)=e^{\frac{1}{e-1}}t,\ \ \text{and}\ \ \omega_1(t)=e^{\frac{1}{e-1}}.\) Then, the radii are \(r_1=0.382692,\, r_2=0.196552,\) and \(r_3=0.126761.\)
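Since \(\omega_0\) and \(\omega\) are linear here, \(r_1\) has the closed form \(r_1=\frac{2}{2L_0+L}\) of Remark 1(3), with \(L_0=e-1\) and \(L=e^{\frac{1}{e-1}}\); a quick check against the reported values (the same formula also covers Example 3 below, where \(L_0=L=96.6629073\)):

```python
import math

e = math.e
L0 = e - 1                  # omega0(t) = (e - 1) t
L = math.exp(1 / (e - 1))   # omega(t)  = e^{1/(e-1)} t
r1 = 2 / (2 * L0 + L)
print(r1)                   # -> 0.38269..., matching the reported r1

# Example 3 of the text: omega0(t) = omega(t) = 96.6629073 t, so r1 = 2/(3L).
print(2 / (3 * 96.6629073))  # -> 0.0068968..., matching its reported r1
```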
Example 2. Considering \(X=Y=C[0,1],\) \(D=\overline{U}(0,1)\) and \(F:D\longrightarrow Y\) defined by
Example 3. For the academic example of the introduction, we have \(\omega_0(t)=\omega(t)=96.6629073 t\) and \(\omega_1(t) =2.\) Then, the radii are \(r_1=0.00689682,\) \(r_2=0.00338133,\) and \(r_3=0.00217133.\)
Example 4. Let \(X=Y=C[0,1],\ \ D=\bar{U}(x^*, 1),\) and consider the nonlinear integral equation of mixed Hammerstein type [1,2,6,7,8,9,12] defined by \[x(s)=\int_0^1G(s,t)\left(x(t)^{3/2}+\frac{x(t)^2}{2}\right)dt,\] where the kernel \(G\) is the Green's function defined on the interval \([0,1]\times [0,1]\) by \[ G(s,t)=\left\{\begin{array}{cc} (1-s)t,& \,\,\,t\leq s,\\ s(1-t),&\,\,\,s\leq t. \end{array}\right. \] The solution \(x^*(s)=0\) is the same as the solution of Equation (1), where \(F:C[0,1]\longrightarrow C[0,1]\) is defined by \[F(x)(s)=x(s)-\int_0^1G(s,t)\left(x(t)^{3/2}+\frac{x(t)^2}{2}\right)dt.\] Notice that \[\left\|\int_0^1G(s,t)dt\right\|\leq \frac{1}{8}.\] Then, we have that \[F'(x)y(s)=y(s)-\int_0^1G(s,t)\left(\frac{3}{2}x(t)^{1/2}+x(t)\right)y(t)dt,\] so, since \(F'(x^*(s))=I,\) \[\left\|F'(x^*)^{-1}(F'(x)-F'(y))\right\|\leq \frac{1}{8}\left(\frac{3}{2}\|x-y\|^{1/2}+\|x-y\|\right).\] Then, we get that \(\omega_0(t)= \omega(t)=\frac{1}{8}\left(\frac{3}{2}t^{1/2}+t\right)\) and \(\omega_1(t)=1+\omega_0(t).\) The radii are \(r_1= 2.6303,\) \(r_2=1.20504,\) and \(r_3=1.302.\) Since (A4) requires \(\bar{U}(x^*,r)\subset D=\bar{U}(x^*,1),\) we obtain \(r=1.\)
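For these choices, \(\int_0^1\omega((1-\theta)t)\,d\theta=\frac{1}{8}\left(\sqrt{t}+\frac{t}{2}\right),\) so \(r_1\) solves \(g_1(t)=1.\) A small bisection sketch (the bracket endpoint and tolerance are our choices) reproduces the reported value:

```python
import math

w0 = lambda t: (1.5 * math.sqrt(t) + t) / 8                 # omega0(t) = omega(t)
g1bar = lambda t: (math.sqrt(t) + t / 2) / 8 / (1 - w0(t)) - 1

# Bisect on [0, rho0), where rho0 solves w0(t) = 1 (rho0 is about 4.7358).
a, b = 0.0, 4.7
while b - a > 1e-10:
    m = 0.5 * (a + b)
    if g1bar(m) < 0:
        a = m
    else:
        b = m
print(0.5 * (a + b))  # -> 2.6303..., the reported r1
```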
Acknowledgments
The authors are grateful to the editor and the anonymous reviewers for their constructive comments. They would also like to thank Kokou Essiomle, Tchilabalo E. Patchali and Essodina Takouda for their help during the preparation of the manuscript.
Author Contributions
All authors contributed equally to the writing of this paper. All authors read and approved the final manuscript.
Conflicts of Interest
The authors declare no conflict of interest.
References
- Amat, S., Hernández, M. A., & Romero, N. (2012). Semilocal convergence of a sixth order iterative method for quadratic equations. Applied Numerical Mathematics, 62(7), 833-841.
- Argyros, I. K. (2007). Computational Theory of Iterative Methods, Series: Studies in Computational Mathematics, 15 (Chui, C. K., & Wuytack, L., Eds.). Elsevier, New York.
- Argyros, I. K., & Magreñán, A. A. (2017). Iterative Methods and their Dynamics with Applications. CRC Press, New York, USA.
- Behl, R., Cordero, A., Motsa, S. S., & Torregrosa, J. R. (2017). Stable high-order iterative methods for solving nonlinear models. Applied Mathematics and Computation, 303, 70-88.
- Behl, R., Bhalla, S., Magreñán, A. A., & Kumar, S. (2020). An efficient high order iterative scheme for large nonlinear systems with dynamics. Journal of Computational and Applied Mathematics, 113249. https://doi.org/10.1016/j.cam.2020.113249.
- Cordero, A., Hueso, J. L., Martínez, E., & Torregrosa, J. R. (2010). A modified Newton-Jarratt's composition. Numerical Algorithms, 55(1), 87-99.
- Magreñán, A. A. (2014). Different anomalies in a Jarratt family of iterative root-finding methods. Applied Mathematics and Computation, 233, 29-38.
- Noor, M. A., & Waseem, M. (2009). Some iterative methods for solving a system of nonlinear equations. Computers & Mathematics with Applications, 57, 101-106.
- Rheinboldt, W. C. (1977). An adaptive continuation process for solving systems of nonlinear equations. In: Mathematical Models and Numerical Methods (A. N. Tikhonov et al., Eds.), Banach Center Publications, 3, 129-142, Warsaw, Poland.
- Traub, J. F. (1964). Iterative Methods for the Solution of Equations. Prentice-Hall, Englewood Cliffs.
- Sharma, J. R., & Arora, H. (2017). Improved Newton-like methods for solving systems of nonlinear equations. SeMA Journal, 74(2), 147-163.
- Weerakoon, S., & Fernando, T. G. I. (2000). A variant of Newton's method with accelerated third-order convergence. Applied Mathematics Letters, 13(8), 87-93.