Open Journal of Mathematical Sciences
ISSN: 2523-0212 (Online) 2616-4906 (Print)
DOI: 10.30538/oms2021.0143
Convergence analysis for a fast class of multi-step Chebyshev-Halley-type methods under weak conditions
Samundra Regmi, Ioannis K. Argyros\(^1\), Santhosh George
Learning Commons, University of North Texas at Dallas, Dallas, TX 75038, USA. (S.R.)
Department of Mathematical Sciences, Cameron University, Lawton, OK 73505, USA. (I.K.A.)
Department of Mathematical and Computational Sciences, National Institute of Technology Karnataka, India-575 025. (S.G.)
\(^{1}\)Corresponding Author: iargyros@cameron.edu
Abstract
Keywords:
1. Introduction
In this study, we consider the problem of approximating a locally unique solution \(x^*\) of the equation
The semi-local convergence of method (2) was given in [9] under the (\(C\)) conditions:
- (\(C_1\))     \(\|F''(x)\|\leq M\) for each \(x\in \Omega\),
- (\(C_2\))    \(F'(x_0)^{-1}\in \mathcal{L}(\mathcal{B}_2, \mathcal{B}_1)\) for some \(x_0\in \Omega\) and \(\|F'(x_0)^{-1}\|\leq \beta,\)
- (\(C_3\))    \(\|F'(x_0)^{-1}F(x_0)\|\leq \eta\) and
- (\(C_4\))    \(\|F''(x)-F''(y)\|\leq \varphi(\|x-y\|)\) for each \(x,y\in \Omega\) and \(\varphi\) satisfies \(\varphi(0)\geq 0,\, \varphi(ts)\leq t^a\varphi(s)\) for each \(t\in [0,1], s\in (0, +\infty), 0\leq a\leq 1.\)
In this study, we present the local convergence analysis not given in [9] and drop condition (\(C_4\)). This way we expand the applicability of method (2). Moreover, we refine Theorem 1 in [9], leading to a new semi-local convergence result for method (2) with a wider convergence region, tighter error bounds on the distances \(\|x_n-x^*\|\), and at least as precise information on the location of the solution. These advantages are obtained because we find a more precise region where the iterates lie, resulting in tighter Lipschitz constants. The new constants are special cases of the old ones, so the advantages are obtained at the same computational cost.
The study is structured as follows: Section 2 contains the local convergence analysis, followed by the semi-local convergence analysis in Section 3. The numerical examples in Section 4 conclude this study.
2. Local convergence
The local convergence analysis that follows is centered on some parameters and scalar functions. Let \(\gamma,\, \delta\in \mathbb{R},\) and let \(w_0:[0, +\infty)\longrightarrow [0, +\infty)\) be a continuous and non-decreasing function with \(w_0(0)=0.\) Define the parameter \(\rho_0\) by (3). Suppose
Let \(U(x,\mu)\) and \(\bar{U}(x,\mu)\) denote the open and closed balls in \(\mathcal{B}_1,\) respectively, of center \(x\in \mathcal{B}_1\) and radius \(\mu > 0.\) The local convergence of method (2) is based on the preceding notation and the conditions (\(A\)):
- (\(a_1\))     \(F:\Omega\subset \mathcal{B}_1\longrightarrow \mathcal{B}_2\) is a twice continuously Fréchet differentiable operator.
- (\(a_2\))     There exists \(x^*\in \Omega\) such that \(F(x^*)=0\) and \(F'(x^*)^{-1}\in \mathcal{L}( \mathcal{B}_1, \mathcal{B}_1).\)
- (\(a_3\))     There exists function \(w_0:[0, +\infty)\longrightarrow[0, +\infty)\) continuous and non-decreasing with \(w_0(0)=0\) such that for each \(x\in \Omega\) \[\|F'(x^*)^{-1}(F'(x)-F'(x^*))\|\leq w_0(\|x-x^*\|).\] Set \(\Omega_0=\Omega\cap U(x^*,\rho_0),\) where \(\rho_0\) is defined by (3).
- (\(a_4\))     There exist functions \(w:[0, \rho_0)\longrightarrow [0, +\infty),v:[0, \rho_0)\longrightarrow [0, +\infty), \bar{v}:[0, \rho_0)\longrightarrow [0, +\infty)\) continuous and nondecreasing with \(w(0)=0\) such that for each \(x,y\in \Omega_0\) \[\|F'(x^*)^{-1}(F'(x)-F'(y))\|\leq w(\|x-y\|),\] \[\|F'(x^*)^{-1}F'(x)\|\leq v(\|x-x^*\|)\] and \[\|F'(x^*)^{-1}F''(x)\|\leq \bar{v}(\|x-x^*\|).\]
- (\(a_5\))     \(\bar{U}(x^*,r)\subseteq \Omega\) and (6) hold, where \(r\) is given by (10).
- (\(a_6\))     There exists \(R\geq r\) such that \[\int_0^1w_0(\theta R)d\theta < 1.\]
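For the common linear choice \(w_0(t)=L_0t\), condition (\(a_6\)) reduces to \(\frac{L_0R}{2}<1\), i.e., \(R<\frac{2}{L_0}\). The following minimal sketch checks the integral condition numerically; the constant \(L_0\) and the radii are illustrative values, not taken from the paper:

```python
# Midpoint-rule check of condition (a6) for the linear choice w0(t) = L0*t.
# L0 and the radii below are illustrative only.

def a6_integral(w0, R, n=100000):
    """Approximate int_0^1 w0(theta*R) dtheta by the midpoint rule."""
    h = 1.0 / n
    return sum(w0((k + 0.5) * h * R) for k in range(n)) * h

L0 = 0.5
w0 = lambda t: L0 * t

R_ok = 3.0    # 3.0 < 2/L0 = 4.0, so (a6) holds: integral = L0*R/2 = 0.75 < 1
R_bad = 5.0   # 5.0 > 4.0, so (a6) fails: integral = 1.25 > 1

print(a6_integral(w0, R_ok))
print(a6_integral(w0, R_bad))
```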
Theorem 1. Suppose that the conditions (\(A\)) and (4)-(9) hold. Then, the sequence \(\{x_n\}\) generated for \(x_0\in U(x^*,r)-\{x^*\}\) by method (2) is well defined in \(U(x^*,r),\) remains in \(U(x^*,r)\) for each \(n=0,1,2,\ldots\) and converges to \(x^*.\) Moreover, the following estimates hold
Proof. Mathematical induction is employed to show estimates (16)-(18). Using (3), (\(a_1\))-(\(a_3\)) and \(x_0\in U(x^*,r),\) we have that
Remark 1.
- (a)   Let \(w_0(t)=L_0t,\ w(t)=Lt\) and \(w^*(t)=L^*t\) (\(w^*\) replacing \(w\) in (\(a_4\))). In [9], the authors used, instead of (\(a_4\)), the condition
\begin{equation} \label{eq3.4*} \|F'(x_0)^{-1}(F'(x)-F'(y))\|\leq L^*\|x-y\|\ \text{for each}\ x, y\in \Omega. \tag{33} \end{equation}
Then, since \(\Omega_0\subseteq \Omega,\)
\begin{equation} L\leq L^* \tag{34} \end{equation}
and
\begin{equation} L_0\leq L^*. \tag{35} \end{equation}
- (b)  
The radius \(r_A=\frac{2}{2L_0+L}\) was obtained in [1,2,3] as the convergence radius for Newton's method under the conditions (\(a_1\))-(\(a_4\)). Notice that the convergence radius for Newton's method given independently by Rheinboldt (see [1]) and Traub (see [2]) is given by
\begin{equation} \label{2.26} \rho=\frac{2}{3L^*}< r_A. \tag{36} \end{equation}
Moreover, the new error bounds [1,2,3] are
\[ \|x_{n+1}-x^*\|\leq \frac{L}{1-L_0\|x_n-x^*\|}\|x_n-x^*\|^2, \] whereas the old ones [7,8] are \[ \|x_{n+1}-x^*\|\leq \frac{L}{1-L\|x_n-x^*\|}\|x_n-x^*\|^2. \] Clearly, the new error bounds are more precise if \(L_0< L\). Moreover, the radius of convergence \(r\) of method (2) is smaller than \(r_A\) (see (6)).
- (c)   The local results can be used for projection methods such as Arnoldi's method, the generalized minimum residual method (GMREM), the generalized conjugate method (GCM), for combined Newton/finite projection methods, and in connection with the mesh independence principle in order to develop the cheapest and most efficient mesh refinement strategy [1,2,3,7,8].
- (d)   The results can also be used to solve equations where the operator \(F'\) satisfies the autonomous differential equation [1,2,3]: \[F'(x)=p(F(x)),\] where \(p\) is a known continuous operator. Since \(F'(x^\ast)= p(F(x^\ast))=p(0),\) we can apply the results without actually knowing the solution \(x^\ast.\) As an example, let \(F(x)=e^x-1.\) Then, we can choose \(p(x)=x+1\) and \(x^\ast=0.\)
- (e)   It is worth noticing that the convergence conditions for method (2) do not change if we use the new instead of the old conditions [9]. Moreover, for the error bounds in practice we can use the computational order of convergence (COC) \[ \xi=\frac{\ln\left(\frac{\|x_{n+2}-x^*\|}{\|x_{n+1}-x^*\|}\right)}{\ln\left(\frac{\|x_{n+1}-x^*\|}{\|x_{n}-x^*\|}\right)},\quad \text{for each}\ n=0,1,2,\ldots \] or the approximate computational order of convergence (ACOC) \[ \xi^*=\frac{\ln\left(\frac{\|x_{n+2}-x_{n+1}\|}{\|x_{n+1}-x_{n}\|}\right)}{\ln\left(\frac{\|x_{n+1}-x_{n}\|}{\|x_{n}-x_{n-1}\|}\right)},\quad \text{for each}\ n=1,2,\ldots, \] instead of the error bounds obtained in Theorem 1. Notice that the ACOC does not require knowledge of \(x^*.\)
- (f)  In view of (\(a_3\)) and the estimate \begin{eqnarray*} \|F'(x^\ast)^{-1}F'(x)\|&=&\|F'(x^\ast)^{-1}(F'(x)-F'(x^\ast))+I\|\\ &\leq& 1+\|F'(x^\ast)^{-1}(F'(x)-F'(x^\ast))\| \leq 1+L_0\|x-x^\ast\|, \end{eqnarray*} the second condition in (\(a_4\)) can be dropped and replaced by \[v(t)=1+L_0 t\] or by \[v(t)=M=2,\] since \(t\in [0, \frac{1}{L_0}).\)
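The radius and error-bound comparisons in (b) can be illustrated numerically. The constants below are hypothetical values satisfying \(L_0\leq L\leq L^*\), chosen only to exhibit the inequalities:

```python
# Hypothetical constants with L0 <= L <= L* (cf. (34)-(35)); illustration only.
L0, L, Lstar = 0.5, 1.0, 2.0

r_A = 2.0 / (2.0 * L0 + L)   # radius from [1,2,3]
rho = 2.0 / (3.0 * Lstar)    # Rheinboldt/Traub radius (36)
print(r_A, rho)              # r_A = 1.0, rho ~ 0.333, so rho < r_A

# error bounds at a sample distance e = ||x_n - x*|| < r_A
e = 0.4
new_bound = L * e**2 / (1.0 - L0 * e)   # new bound, uses the center constant L0
old_bound = L * e**2 / (1.0 - L * e)    # old bound [7,8]
print(new_bound, old_bound)             # the new bound is tighter since L0 < L
```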
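The example in (d) can be checked directly: for \(F(x)=e^x-1\) and \(p(x)=x+1\) the identity \(F'(x)=p(F(x))\) holds, and Newton's method can be written using only \(F\) and \(p\), without knowing \(x^*\). A short sketch:

```python
import math

# Remark (d) with F(x) = exp(x) - 1 and p(t) = t + 1, so F'(x) = p(F(x)).
F = lambda x: math.exp(x) - 1.0
p = lambda t: t + 1.0

# verify the autonomous identity F'(x) = p(F(x)) at a few sample points
for x in (-1.0, 0.0, 0.5, 1.0):
    assert abs(math.exp(x) - p(F(x))) < 1e-12

# Newton's method written only in terms of F and p: no knowledge of x* = 0
x = 1.0
for _ in range(8):
    x -= F(x) / p(F(x))
print(x)   # converges to the solution x* = 0
```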
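The order estimate in (e) built from three consecutive differences can be computed without \(x^*\). The sketch below applies it to Newton's method on the illustrative scalar equation \(x^2-2=0\) (not to method (2)):

```python
import math

# Order-of-convergence estimate from successive differences of iterates,
# computed for Newton's method on x**2 - 2 = 0 (illustrative example only).
f = lambda x: x * x - 2.0
df = lambda x: 2.0 * x

xs = [2.0]
for _ in range(6):
    xs.append(xs[-1] - f(xs[-1]) / df(xs[-1]))

def conv_order(x, n):
    """Estimate from ||x_{n+2}-x_{n+1}||, ||x_{n+1}-x_n||, ||x_n-x_{n-1}||."""
    d1 = abs(x[n + 2] - x[n + 1])
    d2 = abs(x[n + 1] - x[n])
    d3 = abs(x[n] - x[n - 1])
    return math.log(d1 / d2) / math.log(d2 / d3)

print(conv_order(xs, 2))   # close to 2, the convergence order of Newton's method
```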
3. Semi-local convergence analysis
Let us modify the (\(C\)) conditions (given in a non-affine invariant form in [9]) so that they are stated in affine invariant form, and introduce the notion of the restricted convergence region. The conditions (\(\bar{C}\)) are:

Definition 1. The set \(T=T(F,x_0,y_0)\) belongs to the class \(K=K(L_0, L, L_1, L_2 ,\eta _0, \eta)\) if
- (\(\bar{C}_0\))     (\(\bar{C}_0 \))=(\(a_1\)).
- (\(\bar{C}_1\))     There exists \(x_0\in \Omega\) such that \(F'(x_0)^{-1}\in \mathcal{L}(\mathcal{B}_2, \mathcal{B}_1).\)
- (\(\bar{C}_2\))    \(\|F'(x_0)^{-1}(F'(x)-F'(x_0))\|\leq M_0\|x-x_0\|\ \text{for each}\ x\in \Omega.\) Set \(\Omega_0=\Omega \cap B(x_0, \frac{1}{M_0}).\) Then, \(\|F'(x_0)^{-1}F''(x)\|\leq\bar{M}\ \text{for each}\ x\in \Omega_0.\)
- (\(\bar{C}_3\))    (\(\bar{C}_3\))=(\(C_3\)).
- (\(\bar{C}_4\))    \(\|F'(x_0)^{-1}(F''(x)-F''(y))\|\leq\bar{\varphi}(\|x-y\|)\ \text{for each}\ x,y\in \Omega_0,\) where \(\bar{\varphi}\) is as \(\varphi.\)
Theorem 2. Suppose that the conditions (\(\bar{C}\)) hold and \(\bar{U}(x_0, \bar{R} \eta)\subseteq \Omega,\) where \(\bar{R}=\frac{\bar{\varphi}(\bar{a}_0)}{1-\bar{c}_0},\) \(\bar{a}_0=\bar{M}\eta,\ \bar{b}_0=\eta\bar{\varphi}(\eta),\ \bar{c}_0=\bar{h}(\bar{a}_0)\bar{\psi}(\bar{a}_0, \bar{b}_0).\) Suppose also that \(\bar{a}_0 < \bar{s}^*\) and \(\bar{h}(\bar{a}_0)\bar{c}_0 < 1,\) where \(\bar{s}^*\) is the smallest positive solution of the equation \(\bar{\varphi}(t)t-1 =0.\) Then, the sequence \(\{x_n\}\) starting from \(x_0\in \Omega_0\) and generated by method (2) is well defined in \(U(x_0, \bar{R}\eta),\) remains in \(U(x_0, \bar{R}\eta)\) for each \(n=0,1,2,\ldots\) and converges to a unique solution \(x^*\in \Omega_1=\Omega\cap U(x_0, \tilde{R}),\) where \(\tilde{R}=\frac{2}{\bar{a}_0}-\bar{R}.\) Moreover, the following estimates hold: \[\|x_n-x^*\|\leq e_n,\] where \[e_n=\bar{\varphi}(\bar{a}_0)\eta \alpha^n\varepsilon^{\frac{(4+3a)^n-1}{3+3a}}\frac{1}{1-\alpha \varepsilon^{(4+3a)^n}},\qquad \alpha=\frac{1}{\bar{h}(\bar{a}_0)},\qquad \varepsilon=\bar{h}(\bar{a}_0)\bar{c}_0.\]
Proof. Simply notice that the iterates belong in \(\Omega_0,\) which is a more precise location than the set \(\Omega\) used in [9]. Hence, the proof of Theorem 1 in [9] can be repeated using the bar constants and functions instead of the non-bar constants and functions, which is an important modification (see also the Remark that follows).
Remark 2. In view of (37)-(39), we have the advantages mentioned in the introduction of this study. For example, \[a_0=M\eta < s^* \Longrightarrow \bar{a}_0=\bar{M}\eta < \bar{s}^*,\] \[h(a_0)c_0 < 1 \Longrightarrow \bar{h}(\bar{a}_0)\bar{c}_0 < 1,\] but not necessarily vice versa, where \(s^*\) is the smallest positive solution of \(\varphi(t)t-1=0.\)
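The behavior of the a-priori estimates \(e_n\) of Theorem 2 can be sketched numerically. The values of \(\bar{\varphi}(\bar{a}_0),\ \eta,\ \alpha,\ \varepsilon\) and \(a\) below are hypothetical, chosen only so that \(\alpha\varepsilon<1\):

```python
# A-priori estimates e_n of Theorem 2 with hypothetical parameter values
# (phi stands for phi_bar(a0_bar)); illustration only.

def e_bound(n, phi, eta, alpha, eps, a):
    q = (4 + 3 * a) ** n
    return phi * eta * alpha ** n * eps ** ((q - 1) / (3 + 3 * a)) / (1 - alpha * eps ** q)

phi, eta, alpha, eps, a = 1.0, 0.1, 0.9, 0.5, 1.0   # assumed values

bounds = [e_bound(n, phi, eta, alpha, eps, a) for n in range(4)]
print(bounds)   # decreases rapidly: the exponent (4+3a)**n drives the decay
```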
4. Numerical examples
We present the following example to test the convergence criteria.

Example 1. Let \(\mathcal{B}_1=\mathcal{B}_2=\mathbb{R}^3\), \(\Omega=B(0,1)\) and \(x^*=(0,0,0)^T.\) Define \(F\) on \(\Omega\) by
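The definition of \(F\) did not survive this extraction. For a runnable sketch we use a test operator commonly employed with the unit ball \(B(0,1)\) and solution \((0,0,0)^T\) in this literature [1,2,3]; it is an assumption, not necessarily the paper's actual choice:

```python
import math

# Assumed test operator on the unit ball with x* = (0,0,0)^T, as often used
# in this literature [1,2,3]; NOT necessarily the paper's actual choice of F:
#   F(u, v, w) = (exp(u) - 1, (e-1)/2 * v**2 + v, w)^T.
E = math.e

def F(u, v, w):
    return (math.exp(u) - 1.0, (E - 1.0) / 2.0 * v * v + v, w)

# x* is a solution and the Jacobian at x* is the identity, so the local
# conditions (a2)-(a3) can be verified with, e.g., w0(t) = (e-1)*t.
assert F(0.0, 0.0, 0.0) == (0.0, 0.0, 0.0)

jac_diag = (math.exp(0.0), (E - 1.0) * 0.0 + 1.0, 1.0)  # diagonal of F'(x*)
print(jac_diag)   # (1.0, 1.0, 1.0)
```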
5. Conclusion
Major concerns in the study of the convergence of iterative methods (local or semi-local) are: the size of the convergence domain, the selection of the initial point, and the uniqueness of the solution. We address these problems using method (2) under sufficient convergence conditions which are weaker than the ones in [9] for the semi-local convergence case. This way we extend the convergence domain, require fewer iterates to achieve a desired error tolerance, and provide more precise information on the location of the solution. We also examine the local convergence case not studied in [9]. In the future, we will employ our technique to extend the applicability of other iterative methods.

Author Contributions
All authors contributed equally to the writing of this paper. All authors read and approved the final manuscript.

Conflicts of Interest
The authors declare no conflict of interest.

References
- Argyros, I. K., & Magreñán, A. A. (2017). Iterative Methods and Their Dynamics with Applications. CRC Press, New York.
- Argyros, I. K., George, S., & Thapa, N. (2018). Mathematical Modeling for the Solution of Equations and Systems of Equations with Applications, (Volume-I). Nova Publishers, New York.
- Argyros, I. K., George, S., & Thapa, N. (2020). Mathematical Modeling for the Solution of Equations and Systems of Equations with Applications, (Volume-IV). Nova Publishers, New York.
- Ezquerro, J. A., & Hernández, M. A. (2005). On the R-order of the Halley method. Journal of Mathematical Analysis and Applications, 303, 591-601.
- Hernández, M. A., & Salanova, M. A. (2000). Modification of the Kantorovich assumptions for semilocal convergence of the Chebyshev method. Journal of Computational and Applied Mathematics, 126, 131-143.
- Hernández, M. A., & Martinez, E. (2015). On the semilocal convergence of a three step Newton-type iterative process under mild convergence conditions. Numerical Algorithms, 70(2), 377-392.
- Magreñán, A. A., & Argyros, I. K. (2017). Iterative Algorithms II. Nova Publishers, New York.
- Singh, M. K., & Singh, A. K. (2020). Variants of Newton's method using Simpson's \(3/8^{th}\) rule. International Journal of Applied and Computational Mathematics, 6, Article No. 20.
- Wang, X., & Kou, J. (2018). Semilocal convergence for a class of improved multi-step Chebyshev-Halley-like methods under extended conditions. Journal of Fixed Point Theory and Applications, 20, Article No. 122.