OMA-Vol. 2 (2018), Issue 2, pp. 172–193

Open Journal of Mathematical Analysis

Higher order nonlinear equation solvers and their dynamical behavior

Sabir Yasin\(^1\), Amir Naseem
Department of Mathematics, University of Lahore, Pakpattan Campus, Lahore Pakistan.; (S.Y)
Department of Mathematics, University of Management and Technology, Lahore Pakistan.;(A.N)

\(^{1}\)Corresponding Author;  sabiryasin77@gmail.com

Copyright © 2018 Sabir Yasin and Amir Naseem. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.

Abstract

In this report we present new sixth-order iterative methods for solving non-linear equations. The derivation of these methods is purely based on the variational iteration technique. To check their validity and efficiency, we compare our methods with Newton's method, Ostrowski's method, Traub's method and modified Halley's method by solving some test examples. Numerical results show that our developed methods are more effective. Finally, we compare polynomiographs of our developed methods with those of Newton's method, Ostrowski's method, Traub's method and modified Halley's method.

Keywords:

Non-linear equations, Newton’s method, Polynomiography.

1. Introduction

One of the most important problems in numerical analysis is to find the values of \(x\) which satisfy the equation $$ f(x)=0. $$ The solution of such problems has many applications in the applied sciences. In order to solve these problems, various numerical methods have been developed using different techniques such as Adomian decomposition, Taylor's series, perturbation methods, quadrature formulas and variational iteration techniques; see [1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19, 20, 21, 22] and the references therein.
One of the oldest and most famous methods for solving nonlinear equations is the classical Newton's method, which can be written as:
\begin{equation}\label{} x_{n+1}=x_{n}-\frac{f(x_{n})}{f^{\prime }(x_{n})}, n=0,1,2,... \end{equation}
(1)
This is an important and basic method, which converges quadratically [12].
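As a quick illustration of iteration (1), a minimal implementation could look as follows; this is only a sketch (the routine names, the test function \(f_1\) from Section 4 and the tolerance are our own illustrative choices, not part of the method itself).

# Minimal sketch of Newton's method (1); f, df, x0 and eps are illustrative choices.
def newton(f, df, x0, eps=1e-15, max_iter=100):
    x = x0
    for n in range(1, max_iter + 1):
        x_new = x - f(x) / df(x)        # Newton step
        if abs(x_new - x) < eps:        # stopping test |x_{n+1} - x_n| < eps
            return x_new, n
        x = x_new
    return x, max_iter

# Example with f_1(x) = x^3 + 4x^2 - 10 (see Section 4):
root, N = newton(lambda x: x**3 + 4*x**2 - 10, lambda x: 3*x**2 + 8*x, x0=1.0)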
Modifications of Newton's method gave various iterative methods with better convergence order. Some of them are given in [3, 8, 9, 10, 11, 18, 19] and the references therein. In [23], Traub developed the following double Newton's method: $$y_{n}=x_{n}-\frac{f(x_{n})}{f^{\prime }(x_{n})},$$ $$x_{n+1}=y_{n}-\frac{f(y_{n})}{f^{\prime }(y_{n})},\,\,\,n=0,1,2,3,...$$ This method is also known as Traub's method.
Ostrowski's method (see [24, 25, 26]) is also a well-known iterative method, which has fourth-order convergence: $$y_{n}=x_{n}-\frac{f(x_{n})}{f^{\prime }(x_{n})},$$ $$x_{n+1}=y_{n}-\frac{f(x_n)f(y_{n})}{f^\prime(x_n)[f(x_n)-2f(y_{n})]},\,\,\,n=0,1,2,3,...$$ Noor et al. [27] developed the modified Halley's method, which has fifth-order convergence: $$y_{n}=x_{n}-\frac{f(x_{n})}{f^{\prime }(x_{n})},$$ $$x_{n+1}=y_{n}-\frac{f(x_n)f(y_{n})f^\prime(y_{n})}{2f(x_n)f^{\prime 2}(y_n)-f^{\prime 2}(x_n)f(y_n)+f^\prime(x_n)f^\prime(y_n)f(y_{n})},\,\,\,n=0,1,2,3,...$$ In this paper, we develop three new iterative methods using the variational iteration technique. The variational iteration technique was developed by He [14]. Using this technique, Noor and Shah [17] have suggested and analyzed some iterative methods for solving nonlinear equations. The purpose of this technique was to solve a variety of diverse problems [14, 15]. We now apply the described technique to obtain higher-order iterative methods. We also discuss the convergence criteria of these new iterative methods. Several examples are given to show the performance of our proposed methods compared with other similar existing methods. We also compare polynomiographs of our developed methods with those of Newton's method, Ostrowski's method, Traub's method and modified Halley's method.
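For comparison, the two-step methods above can be sketched in the same style; the following single-step routines are only illustrative transcriptions of Traub's and Ostrowski's schemes (the caller supplies \(f\), \(f'\) and the current iterate).

# One iteration of Traub's (double Newton) method: two Newton steps per call.
def traub_step(f, df, x):
    y = x - f(x) / df(x)
    return y - f(y) / df(y)

# One iteration of Ostrowski's fourth-order method.
def ostrowski_step(f, df, x):
    fx = f(x)
    y = x - fx / df(x)
    fy = f(y)
    return y - fx * fy / (df(x) * (fx - 2.0 * fy))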

2. Construction of Iterative Methods Using the Variational Iteration Technique

In this section, we develop some new sixth-order iterative methods for solving nonlinear equations. By using the variational iteration technique, we develop the main recurrence relation from which we derive the new iterative methods by considering some special cases of the auxiliary function \(g\). These are multi-step methods consisting of predictor and corrector steps, and their convergence is better than that of one-step methods. Now consider the nonlinear equation of the form
\begin{equation}\label{1} f(x)=0. \end{equation}
(2)
Suppose that \(\alpha\) is a simple root of (2) and \(\gamma\) is an initial guess sufficiently close to \(\alpha \). Let \(g(x)\) be any arbitrary function and \(\lambda\) be a parameter, usually called the Lagrange multiplier, which can be identified by the optimality condition. Consider the auxiliary function
\begin{equation}\label{2} H(x)=\psi(x)+\lambda[f(\psi(x))g(\psi(x))], \end{equation}
(3)
where \(\psi(x)\) is an arbitrary auxiliary function of order \(p\) with \(p\geq 1\).
Using the optimality criterion, we can obtain the value of \(\lambda\) from (3) as:
\begin{equation}\label{3} \lambda=-\frac{1}{g^{\prime }(\psi(x))f(\psi(x))+g(\psi(x))f^{\prime }(\psi(x))}. \end{equation}
(4)
From (3) and (4), we get
\begin{equation}\label{4} H(x)=\psi(x)-\frac{f(\psi(x))g(\psi(x))}{[f^{\prime }(\psi(x))g(\psi(x))+ f(\psi(x))g^{\prime }(\psi(x))]}. \end{equation}
(5)
Now we apply (5) to construct a general scheme for iterative methods. For this, suppose that
\begin{equation}\label{5} \psi(x)=y=x-\frac{f(x)}{f^{\prime }(x)}-\frac{f^{2}(x)f^{\prime \prime }(x)}{2f^{\prime 3}(x)}-\frac{f^{3}(x)f^{\prime \prime \prime }(x)}{6f^{\prime 4}(x)}, \end{equation}
(6)
which is the well-known Abbasbandy's method of third-order convergence. With the help of (5) and (6), we can write
\begin{equation}\label{6} H(x)=y-\frac{f(y)g(y)}{[f^{\prime }(y)g(y)+ f(y)g^{\prime }(y)]}, \end{equation}
(7)
If \(\alpha\) is the root of \(f(x)=0\), then for \(x=\alpha\) we can write:
\begin{equation}\label{7} \frac{g(y)}{g^{\prime }(y)}=\alpha-\frac{f(\alpha)}{f^{\prime }(\alpha)}-\frac{f^{2}(\alpha)f^{\prime \prime }(\alpha)}{2f^{\prime 3}(\alpha)}-\frac{f^{3}(\alpha)f^{\prime \prime \prime }(\alpha)}{6f^{\prime 4}(\alpha)} =\frac{g(\alpha)}{g^{\prime }(\alpha)}. \end{equation}
(8)
Also,
\begin{equation}\label{8} \frac{g(x)}{g^{\prime }(x)}=\frac{g(\alpha)}{g^{\prime }(\alpha)}. \end{equation}
(9)
With the help of (8) and (9), we get
\begin{equation}\label{9} \frac{g(y)}{g^{\prime }(y)}=\frac{g(x)}{g^{\prime }(x)}. \end{equation}
(10)
Using (10) in (7), we obtain
\begin{equation}\label{10} H(x)=y-\frac{f(y)g(x)}{[f^{\prime }(y)g(x)+ f(y)g^{\prime }(x)]}. \end{equation}
(11)
This enables us to define the following iterative scheme:
\begin{equation} x_{n+1}=y_{n}-\frac{f(y_n)g(x_n)}{[f^{'}(y_n)g(x_n)+f(y_n)g^{'}(x_n)]} \end{equation}
(12)
where \(y_n=x_{n}-\frac{f(x_{n})}{f^{\prime }(x_{n})}-\frac{f^{2}(x_{n})f^{\prime \prime }(x_{n})}{2f^{\prime 3}(x_{n})}-\frac{f^{3}(x_{n})f^{\prime \prime \prime }(x_{n})}{6f^{\prime 4}(x_{n})}\). Relation (12) is the main and general iterative scheme, which we use to deduce iterative methods for solving non-linear equations by considering some special cases of the auxiliary functions \(g\).
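As a rough illustration of how (12) can be turned into a computer routine, the scheme can be coded once with the auxiliary function \(g\) and its derivative passed in as parameters; the sketch below is illustrative only (the derivatives of \(f\) are supplied by hand and all names are our own).

# Sketch of the general scheme (12): Abbasbandy predictor followed by the corrector
# built from an auxiliary function g and its derivative dg.
def general_scheme(f, d1, d2, d3, g, dg, x0, eps=1e-15, max_iter=50):
    x = x0
    for n in range(1, max_iter + 1):
        fx, f1 = f(x), d1(x)
        # Predictor y_n: third-order Abbasbandy step, equation (6)
        y = x - fx/f1 - fx**2*d2(x)/(2*f1**3) - fx**3*d3(x)/(6*f1**4)
        # Corrector from (12), with g evaluated at x_n
        x_new = y - f(y)*g(x) / (d1(y)*g(x) + f(y)*dg(x))
        if abs(x_new - x) < eps:
            return x_new, n
        x = x_new
    return x, max_iter

# Example: the choice g(x) = exp(beta*x) with beta = 1 (Case 1 below) applied to
# f_1(x) = x^3 + 4x^2 - 10 with x_0 = -0.7, as in Table 1.
from math import exp
root, N = general_scheme(lambda x: x**3 + 4*x**2 - 10,
                         lambda x: 3*x**2 + 8*x,
                         lambda x: 6*x + 8,
                         lambda x: 6.0,
                         g=exp, dg=exp, x0=-0.7)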
2.1. Case 1. Let \(g(x_n)=e^{\beta x_n}\), then \(g^{\prime}(x_n)= \beta g(x_n)\). Using these values in (12), we obtain the following algorithm.

Algorithm 2.1. For a given \(x_{0}\), compute the approximate solution \(x_{n+1}\) by the following iterative schemes: \begin{eqnarray*} y_{n} &=&x_{n}-\frac{f(x_{n})}{f^{\prime }(x_{n})}-\frac{f^{2}(x_{n})f^{\prime \prime }(x_{n})}{2f^{\prime 3}(x_{n})}-\frac{f^{3}(x_{n})f^{\prime \prime \prime }(x_{n})}{6f^{\prime 4}(x_{n})},n=0,1,2,..., \\ x_{n+1}&=&y_n-\frac{f(y_n)}{[f^{\prime }(y_n)+\beta f(y_n)]}. \end{eqnarray*}

2.2. Case 2. Let \(g(x_n)=e^{\beta f(x_n)}\), then \(g^{\prime}(x_n)= \beta f^{\prime }(x_{n})g(x_n)\). Using these values in (12), we obtain the following algorithm.

Algorithm 2.2. For a given \(x_{0}\), compute the approximate solution \(x_{n+1}\) by the following iterative schemes: \begin{eqnarray*} y_{n} &=&x_{n}-\frac{f(x_{n})}{f^{\prime }(x_{n})}-\frac{f^{2}(x_{n})f^{\prime \prime }(x_{n})}{2f^{\prime 3}(x_{n})}-\frac{f^{3}(x_{n})f^{\prime \prime \prime }(x_{n})}{6f^{\prime 4}(x_{n})},n=0,1,2,..., \\ x_{n+1}&=&y_n-\frac{f(y_n)}{[f^{\prime }(y_n)+\beta f(y_n)f^{\prime }(x_n)]}. \end{eqnarray*}

2.3. Case 3. Let \(g(x_n)=e^{-\frac {\beta}{ f(x_n)}}\), then \(g^{\prime}(x_n)= \beta \frac {f^{\prime }(x_n)}{f^2(x_n)}g(x_n)\). Using these values in (12), we obtain the following algorithm.

Algorithm 2.3. For a given \(x_{0}\), compute the approximate solution \(x_{n+1}\) by the following iterative schemes: \begin{eqnarray*} y_{n} &=&x_{n}-\frac{f(x_{n})}{f^{\prime }(x_{n})}-\frac{f^{2}(x_{n})f^{\prime \prime }(x_{n})}{2f^{\prime 3}(x_{n})}-\frac{f^{3}(x_{n})f^{\prime \prime \prime }(x_{n})}{6f^{\prime 4}(x_{n})},n=0,1,2,..., \\ x_{n+1}&=&y_n-\frac{f^{2}(x_n)f(y_n)}{[f^{2}(x_n)f^{\prime }(y_n)+\beta f^{\prime}(x_n) f(y_n)]}. \end{eqnarray*}

By assuming different values of \(\beta\), we can obtain different iterative methods. To obtain the best results from the above algorithms, choose values of \(\beta\) that keep the denominator non-zero and as large as possible in magnitude.
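Only the corrector step distinguishes Algorithms 2.1-2.3, so in a sketch like the one given after (12) the final step can simply be swapped. The three correctors read as follows (illustrative code; \(x\) is the current iterate, \(y\) the predictor value, and the derivatives are supplied by the caller).

# Correctors of Algorithms 2.1, 2.2 and 2.3 for a real function f with derivative d1.
def corrector_2_1(f, d1, x, y, beta=1.0):
    # from g(x) = exp(beta*x)
    return y - f(y) / (d1(y) + beta*f(y))

def corrector_2_2(f, d1, x, y, beta=1.0):
    # from g(x) = exp(beta*f(x))
    return y - f(y) / (d1(y) + beta*f(y)*d1(x))

def corrector_2_3(f, d1, x, y, beta=1.0):
    # from g(x) = exp(-beta/f(x))
    fx = f(x)
    return y - fx**2*f(y) / (fx**2*d1(y) + beta*d1(x)*f(y))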

3. Convergence Analysis

In this section, we discuss the convergence order of the main and general iteration scheme (12).

Theorem 3.1. Suppose that \(\alpha \) is a root of the equation \(f(x)=0\). If \(f(x)\) is sufficiently smooth in the neighborhood of \(\alpha \), then the convergence order of the main and general iteration scheme, described in relation (12) is at least six.

Proof. To analyze the convergence of the main and general iteration scheme described in relation (12), suppose that \(\alpha \) is a root of the equation \(f(x)=0\) and let \(e_n=x_n-\alpha\) be the error at the \(n\)th iteration. Using the Taylor series expansion, we have \begin{eqnarray*} f(x_n)&=&{f^{\prime }(\alpha)e_n}+\frac{1}{2!}{f^{\prime \prime }(\alpha)e_n^2}+\frac{1}{3!}{f^{\prime \prime \prime }(\alpha)e_n^3}+\frac{1}{4!}{f^{(iv) }(\alpha)e_n^4}+\frac{1}{5!}{f^{(v) }(\alpha)e_n^5}\\ &+&\frac{1}{6!}{f^{(vi) }(\alpha)e_n^6}+O(e_n^{7}) \end{eqnarray*}

\begin{eqnarray} f(x_n)={f^{\prime }(\alpha)}[e_n+c_2e_n^2+c_3e_n^3+c_4e_n^4+c_5e_n^5+c_6e_n^6+O(e_n^{7})] \end{eqnarray}
(13)
\begin{eqnarray} {f^{\prime }(x_n)}&=&{f^{\prime }(\alpha)}[1+2c_2e_n+3c_3e_n^2+4c_4e_n^3+5c_5e_n^4+6c_6e_n^5+7c_7e_n^6\notag\\ &+&O(e_n^{7})]\label{15} \end{eqnarray}
(14)
\begin{eqnarray} {f^{\prime \prime}(x_n)}&=&{f^{\prime}(\alpha)}[2c_2+6c_3e_n+12c_4e_n^2+20c_5e_n^3+30c_6e_n^4+42c_7e_n^5+56c_8e_n^6\notag\\ &+&O(e_n^{7})] \end{eqnarray}
(15)
\begin{eqnarray} {f^{\prime \prime\prime}(x_n)}&=&{f^{\prime}(\alpha)}[6c_3+24c_4e_n+60c_5e_n^2+120c_6e_n^3+210c_7e_n^4+336c_8e_n^5+504c_9e_n^6\notag\\ &+&O(e_n^{7})]. \end{eqnarray}
(16)
where $$c_k=\frac{1}{k!}\frac{{f^{(k) }(\alpha)}}{{f^{\prime }(\alpha)}}.$$ With the help of (13), (14), (15) and (16), we get
\begin{eqnarray} y_n&=& \alpha+(-2c_3+2c_2^2)e_n^3+(-7c_4+17c_2c_3-9c_2^3)e_n^4+(-16c_5+44c_2c_4+24c_3^2-82c_3c_2^2+30c_2^4)e_n^5\notag\\ &+&(-30c_6+90c_2c_5+104c_3c_4-202c_2c_3^2+314c_3c_2^3-188c_4c_2^2-88c_2^5)e_n^6+O(e_n^{7}) \label{16} \end{eqnarray}
(17)
\begin{eqnarray} f(y_n)&=&{f^{\prime}(\alpha)}[(-2c_3+2c_2^2)e_n^3+(-7c_4+17c_2c_3-9c_2^3)e_n^4+(-16c_5+44c_2c_4+24c_3^2-82c_3c_2^2\notag\\ &+&30c_2^4)e_n^5+(-198c_2c_3^2+306c_3c_2^3-84c_2^5-30c_6+90c_2c_5+104c_3c_4-188c_4c_2^2)e_n^6\notag\\ &+&O(e_n^{7})]\label{17} \end{eqnarray}
(18)
\begin{eqnarray} {f^{\prime}(y_n)}&=&{f^{\prime}(\alpha)}[1+(-4c_2c_3+4c_2^3)e_n^3+(34c_3c_2^2-18c_2^4-14c_2c_4)e_n^4+(-32c_2c_5+88c_4c_2^2+48c_2c_3^2\notag\\ &-&164c_3c_2^3+60c_2^5)e_n^5+(-60c_2c_6+180c_5c_2^2+208c_4c_2c_3-428c_3^2c_2^2+640c_3c_2^4-376c_4c_2^3\notag\\ &-&176c_2^6+12c_3^3)e_n^6+O(e_n^{7})]\label{18} \end{eqnarray}
(19)
\begin{eqnarray} g(x_n)&=& g(\alpha)+g^{\prime}(\alpha)e_n+\frac{g^{\prime \prime}(\alpha)}{2!}e_n^2+\frac{g^{\prime\prime\prime}(\alpha)}{3!}e_n^3+\frac{g^{(iv)}(\alpha)}{4!}e_n^4+\frac{g^{(v)}(\alpha)}{5!}e_n^5+ \frac{g^{(vi)}(\alpha)}{6!}e_n^6\notag\\ &+& O(e_n^{7})\label{20} \end{eqnarray}
(20)
\begin{eqnarray} g^{\prime}(x_n)& = & g^{\prime}(\alpha)+g^{\prime\prime}(\alpha)e_n+\frac{g^{\prime\prime\prime}(\alpha)}{2!}e_n^2+\frac{g^{(iv)}(\alpha)}{3!}e_n^3+\frac{g^{(v)}(\alpha)}{4!}e_n^{4}+\frac{g^{(vi)}(\alpha)}{5!}e_n^5+\frac{g^{(vii)}(\alpha)}{6!}e_n^6\notag\\ &+& O(e_n^{7}). \end{eqnarray}
(21)
Using equations (13)-(21) in the general iteration scheme (12), we get: \begin{eqnarray*} x_{n+1}&=&\alpha+\frac{4(-c_3+c_2^2)^2[g(\alpha)c_2+g^{\prime}(\alpha)]}{g(\alpha)}e_n^6+O(e_n^{7}), \end{eqnarray*} which implies that \begin{eqnarray*} e_{n+1}&=&\frac{4(-c_3+c_2^2)^2[g(\alpha)c_2+g^{\prime}(\alpha)]}{g(\alpha)}e_n^6+O(e_n^{7}). \end{eqnarray*} The above relation shows that the main and general iteration scheme (12) has sixth-order convergence, and hence all the iterative methods deduced from it also have convergence of order six.
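The sixth-order behaviour predicted by Theorem 3.1 can also be observed numerically through the computational order of convergence \(\rho_n \approx \ln(e_{n+1}/e_{n})/\ln(e_{n}/e_{n-1})\); a rough sketch is given below (extended-precision arithmetic, here via the mpmath package, is needed to see the order before rounding errors dominate, and the test function and starting point are illustrative choices).

# Sketch: estimate the computational order of convergence of Algorithm 2.1 (beta = 1).
from mpmath import mp, mpf, log

mp.dps = 200                               # working precision in decimal digits
f  = lambda x: x**3 + 4*x**2 - 10          # f_1 from Section 4
d1 = lambda x: 3*x**2 + 8*x
d2 = lambda x: 6*x + 8
d3 = lambda x: mpf(6)

def step(x, beta=1):
    fx, f1 = f(x), d1(x)
    y = x - fx/f1 - fx**2*d2(x)/(2*f1**3) - fx**3*d3(x)/(6*f1**4)   # predictor (6)
    return y - f(y)/(d1(y) + beta*f(y))                             # corrector of Algorithm 2.1

xs = [mpf(1)]
for _ in range(5):
    xs.append(step(xs[-1]))
alpha = xs[-1]                              # last iterate taken as reference root
e = [abs(x - alpha) for x in xs[:4]]        # errors of the first four iterates
for n in range(1, 3):
    print(log(e[n+1]/e[n]) / log(e[n]/e[n-1]))   # ratios should approach 6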

4. Applications

In this section we include some nonlinear functions to illustrate the efficiency of our developed algorithms for \(\beta = 1\). We compare our developed algorithms with Newton's method (NM) [12], Ostrowski's method (OM) [7], Traub's method (TM) [12], and modified Halley's method (MHM) [28]. We used \(\varepsilon =10^{-15}\). The following stopping criteria are used for the computer programs:
  1. \(|x_{n+1}-x_{n}|< \varepsilon .\)
  2. \(|f(x_{n+1})|< \varepsilon .\)
Table 1. Comparison of various iterative methods
\(f_{1}=x^{3}+4x^{2}-10,x_{0}=-0.7\)
Methods N \(N_{f}\) \(|f(x_{n+1})|\) \(x_{n+1}\)
NM \(20\) \(40\) \(1.056394e-24\)
OM \(51\) \(153\) \(9.750058e-26\)
TM \(10\) \(30\) \(1.056394e-24\)
MHM \(30\) \(90\) \(2.819181e-35\) \(1.365230013414096845760806828980\)
Algorithm 2.1 \(3\) \(9\) \(1.037275e-74\)
Algorithm 2.2 \(3\) \(9\) \(7.962504e-39\)
Algorithm 2.3 \(3\) \(9\) \(2.748246e-39\)

Table 2. Comparison of various iterative methods
\(f_{2}=\ln(x)+x,x_{0}=2.6.\)
Methods N \(N_{f}\) \(|f(x_{n+1})|\) \(x_{n+1}\)
NM \(8\) \(16\) \(6.089805e-28\)
OM \(4\) \(12\) \(3.421972e-53\)
TM \(4\) \(12\) \(6.089805e-28\)
MHM \(4\) \(21\) \(4.247135e-28\) \(0.567143290409783872999968662210\)
Algorithm 2.1 \(3\) \(9\) \(1.034564e-21\)
Algorithm 2.2 \(3\) \(9\) \(1.994520e-38\)
Algorithm 2.3 \(3\) \(9\) \(7.258268e-15\)

Table 3. Comparison of various iterative methods
\(f_{3}=\ln{x}+\cos(x),x_{0}=0.1.\)
Methods N \(N_{f}\) \(|f(x_{n+1})|\) \(x_{n+1}\)
NM \(6\) \(12\) \(2.313773e-18\)
OM \(3\) \(9\) \(5.848674e-26\)
TM \(3\) \(9\) \(2.313773e-18\)
MHM \(4\) \(12\) \(1.281868e-60\) \(0.397748475958746982312388340926\)
Algorithm 2.1 \(2\) \(6\) \(2.915840e-34\)
Algorithm 2.2 \(2\) \(6\) \(3.380437e-32\)
Algorithm 2.3 \(2\) \(6\) \(5.839120e-23\)

Table 4. Comparison of various iterative methods
\(f_{4}=xe^{x}-1,x_{0}=1.\)
Methods N \(N_{f}\) \(|f(x_{n+1})|\) \(x_{n+1}\)
NM \(5\) \(10\) \(8.478184e-17\)
OM \(3\) \(9\) \(8.984315e-40\)
TM \(3\) \(9\) \(2.130596e-33\)
MHM \(3\) \(9\) \(1.116440e-68\) \(0.567143290409783872999968662210\)
Algorithm 2.1 \(2\) \(6\) \(5.078168e-24\)
Algorithm 2.2 \(3\) \(9\) \(4.315182e-82\)
Algorithm 2.3 \(3\) \(9\) \(1.027667e-46\)

Table 5. Comparison of various iterative methods
\(f_{5}=x^{3}-1,x_{0}=2.3.\)
Methods N \(N_{f}\) \(|f(x_{n+1})|\) \(x_{n+1}\)
NM \(7\) \(14\) \(9.883568e-25\)
OM \(4\) \(12\) \(2.619999e-56\)
TM \(4\) \(12\) \(3.256164e-49\)
MHM \(3\) \(9\) \(3.233029e-46\) \(1.000000000000000000000000000000\)
Algorithm 2.1 \(2\) \(6\) \(1.472038e-16\)
Algorithm 2.2 \(3\) \(9\) \(1.847778e-15\)
Algorithm 2.3 \(3\) \(9\) \(1.620804e-18\)
Tables 1-5 show the numerical comparisons of Newton's method, Ostrowski's method, Traub's method, modified Halley's method and our developed methods. The columns represent the number of iterations \(N\), the number of function and derivative evaluations \(N_{f}\) required to meet the stopping criteria, and the magnitude \(|f(x_{n+1})|\) at the final estimate \(x_{n+1}.\)

5. Polynomiography

Polynomials are one of the most significant objects in many fields of mathematics. Polynomial root-finding has played a key role in the history of mathematics. It is one of the oldest and most deeply studied mathematical problems. The last interesting contribution to the history of polynomial root-finding was made by Kalantari [29], who introduced polynomiography. As a method which generates nice looking graphics, it was patented by Kalantari in the USA in 2005 [30, 31]. Polynomiography is defined to be "the art and science of visualization in approximation of the zeros of complex polynomials, via fractal and non-fractal images created using the mathematical convergence properties of iteration functions" [29]. An individual image is called a "polynomiograph". Polynomiography combines both art and science aspects.

Polynomiography gives a new way to solve this ancient problem by using new algorithms and computer technology. Polynomiography is based on the use of one or an infinite number of iteration methods formulated for the purpose of approximating the roots of polynomials, e.g. Newton's method, Halley's method, etc. The word "fractal", which partially appears in the definition of polynomiography, was coined by the famous mathematician Benoit Mandelbrot [32]. Both fractal images and polynomiographs can be obtained via different iterative schemes. Fractals are self-similar, have a typical structure and are independent of scale. On the other hand, polynomiographs are quite different. The "polynomiographer" can control the shape and design the image in a more predictable way by applying different iteration methods to the infinite variety of complex polynomials. Generally, fractals and polynomiographs belong to different classes of graphical objects. Polynomiography has diverse applications in math, science, education, art and design. According to the Fundamental Theorem of Algebra, any complex polynomial with complex coefficients \(\left\{ a_{n},a_{n-1},...,a_{1},a_{0}\right\} \):

\begin{equation} p(z)=a_{n}z^{n}+a_{n-1}z^{n-1}+...+a_{1}z+a_{0} \label{5.1} \end{equation}
(22)
or by its zeros (roots) \(\left\{ r_{1},r_{2},...,r_{n-1},r_{n}\right\} :\)
\begin{equation} p(z)=(z-r_{1})(z-r_{2})...(z-r_{n}) \end{equation}
(23)
of degree \(n\) has \(n\) roots (zeros) which may or may not be distinct. The degree of the polynomial determines the number of basins of attraction, and by placing the roots on the complex plane manually, the localization of the basins can be controlled.
Usually, polynomiographs are colored based on the number of iterations needed to obtain the approximation of some polynomial root with a given accuracy and a chosen iteration method. The description of polynomiography, its theoretical background and artistic applications are described in [29, 30, 31].

5.1. Iteration

During the last century, many different numerical techniques for solving the nonlinear equation \(f(x)=0\) have been successfully applied. We now write our developed algorithms as: \begin{eqnarray*} y_{n} &=&x_{n}-\frac{f(x_{n})}{f^{\prime }(x_{n})}-\frac{f^{2}(x_{n})f^{\prime \prime }(x_{n})}{2f^{\prime 3}(x_{n})}-\frac{f^{3}(x_{n})f^{\prime \prime \prime }(x_{n})}{6f^{\prime 4}(x_{n})},n=0,1,2,..., \\ x_{n+1}&=&y_n-\frac{f(y_n)}{[f^{\prime }(y_n)+\beta f(y_n)]}, \end{eqnarray*} which we call Algorithm 2.1 for solving nonlinear equations; \begin{eqnarray*} y_{n} &=&x_{n}-\frac{f(x_{n})}{f^{\prime }(x_{n})}-\frac{f^{2}(x_{n})f^{\prime \prime }(x_{n})}{2f^{\prime 3}(x_{n})}-\frac{f^{3}(x_{n})f^{\prime \prime \prime }(x_{n})}{6f^{\prime 4}(x_{n})},n=0,1,2,..., \\ x_{n+1}&=&y_n-\frac{f(y_n)}{[f^{\prime }(y_n)+\beta f(y_n)f^{\prime }(x_n)]}, \end{eqnarray*} which we call Algorithm 2.2 for solving nonlinear equations; and \begin{eqnarray*} y_{n} &=&x_{n}-\frac{f(x_{n})}{f^{\prime }(x_{n})}-\frac{f^{2}(x_{n})f^{\prime \prime }(x_{n})}{2f^{\prime 3}(x_{n})}-\frac{f^{3}(x_{n})f^{\prime \prime \prime }(x_{n})}{6f^{\prime 4}(x_{n})},n=0,1,2,..., \\ x_{n+1}&=&y_n-\frac{f^{2}(x_n)f(y_n)}{[f^{2}(x_n)f^{\prime }(y_n)+\beta f^{\prime}(x_n) f(y_n)]}, \end{eqnarray*} which we call Algorithm 2.3 for solving nonlinear equations. Let \(p(z)\) be a complex polynomial; then \begin{eqnarray*} y_{n} &=&z_{n}-\frac{p(z_{n})}{p^{\prime }(z_{n})}-\frac{p^{2}(z_{n})p^{\prime \prime }(z_{n})}{2p^{\prime 3}(z_{n})}-\frac{p^{3}(z_{n})p^{\prime \prime \prime }(z_{n})}{6p^{\prime 4}(z_{n})},n=0,1,2,..., \\ z_{n+1}&=&y_n-\frac{p(y_n)}{[p^{\prime }(y_n)+\beta p(y_n)]}, \end{eqnarray*} is Algorithm 2.1 for solving nonlinear complex equations; \begin{eqnarray*} y_{n} &=&z_{n}-\frac{p(z_{n})}{p^{\prime }(z_{n})}-\frac{p^{2}(z_{n})p^{\prime \prime }(z_{n})}{2p^{\prime 3}(z_{n})}-\frac{p^{3}(z_{n})p^{\prime \prime \prime }(z_{n})}{6p^{\prime 4}(z_{n})},n=0,1,2,..., \\ z_{n+1}&=&y_n-\frac{p(y_n)}{[p^{\prime }(y_n)+\beta p(y_n)p^{\prime }(z_n)]}, \end{eqnarray*} is Algorithm 2.2 for solving nonlinear complex equations; and \begin{eqnarray*} y_{n} &=&z_{n}-\frac{p(z_{n})}{p^{\prime }(z_{n})}-\frac{p^{2}(z_{n})p^{\prime \prime }(z_{n})}{2p^{\prime 3}(z_{n})}-\frac{p^{3}(z_{n})p^{\prime \prime \prime }(z_{n})}{6p^{\prime 4}(z_{n})},n=0,1,2,..., \\ z_{n+1}&=&y_n-\frac{p^{2}(z_n)p(y_n)}{[p^{2}(z_n)p^{\prime }(y_n)+\beta p^{\prime}(z_n) p(y_n)]}, \end{eqnarray*} is Algorithm 2.3 for solving nonlinear complex equations, where \(z_{0}\in\mathbb{C}\) is a starting point. The sequence \( \{z_{n}\}_{n=0}^{\infty }\) is called the orbit of the point \(z_{0}\). If this orbit converges to a root \(z^{\ast }\) of \(p\), then we say that \(z_{0}\) is attracted to \(z^{\ast }\). The set of all starting points for which \( \{z_{n}\}_{n=0}^{\infty }\) converges to the root \(z^{\ast }\) is called the basin of attraction of \(z^{\ast }.\)

6. Convergence test

In numerical algorithms that are based on iterative processes we need a stopping criterion for the process, that is, a test that tells us that the process has converged or is very near to the solution. This type of test is called a convergence test. Usually, in iterative processes that use feedback, like the root-finding methods, the standard convergence test has the following form:
\begin{equation} |z_{n+1}-z_{n}|< \varepsilon , \label{7.1} \end{equation}
(24)
where \(z_{n+1}\) and \(z_{n}\) are two successive points in the iteration process and \(\varepsilon >0\) is a given accuracy. In this paper we also use the stop criterion (24).

7. Applications

In this section we present some examples of polynomiographs for different complex polynomial equations \(p(z)=0\) and some special polynomials, using our developed algorithms. The different colors of the images depend upon the number of iterations needed to reach a root with the given accuracy \(\varepsilon =0.001\). One can obtain infinitely many nice looking polynomiographs by changing the parameter \(k,\) where \(k\) is the upper bound of the number of iterations.
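As a rough indication of how such images can be produced, the sketch below iterates the complex form of Algorithm 2.1 (with \(\beta=1\)) from every point of a rectangular grid and records the number of iterations needed to satisfy the convergence test (24) with \(\varepsilon =0.001\); numpy and matplotlib are assumed only for the array handling and the rendering, and all names and grid parameters are illustrative.

# Sketch: iteration counts of Algorithm 2.1 for p(z) = z^3 - 1 over a grid in the complex plane.
import numpy as np
import matplotlib.pyplot as plt

p  = lambda z: z**3 - 1
d1 = lambda z: 3*z**2
d2 = lambda z: 6*z
d3 = lambda z: 6.0
beta, eps, k = 1.0, 1e-3, 30               # beta, accuracy epsilon, iteration bound k

def iterations(z):
    for n in range(1, k + 1):
        try:
            pz, p1 = p(z), d1(z)
            y = z - pz/p1 - pz**2*d2(z)/(2*p1**3) - pz**3*d3(z)/(6*p1**4)
            z_new = y - p(y)/(d1(y) + beta*p(y))
        except (ZeroDivisionError, OverflowError):
            return k                       # treat breakdown as non-convergence
        if abs(z_new - z) < eps:           # convergence test (24)
            return n
        z = z_new
    return k

xs = np.linspace(-2.0, 2.0, 200)
ys = np.linspace(-2.0, 2.0, 200)
counts = np.array([[iterations(complex(x, y)) for x in xs] for y in ys])
plt.imshow(counts, extent=[-2, 2, -2, 2], origin='lower')
plt.show()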

7.1. Polynomiographs of Different Complex Polynomials

In this section, we present polynomiographs of the following complex polynomials, using our developed methods for \(\beta=1\).

Example 7.1. Polynomiographs for \(z^{2}-1=0\) via Newton's method (row one, left), Ostrowski's method (row one, middle), Traub's method (row one, right), modified Halley's method (row two, left), Algorithm 2.1 (row two, middle), Algorithm 2.2 (row two, right) and Algorithm 2.3 (row three) are given below.

Figure 1. Polynomiographs of \(z^{2}-1=0.\)

Example 7.2. Polynomiographs for \(z^{3}-1=0\) via Newton's method (row one, left), Ostrowski's method (row one, middle), Traub's method (row one, right), modified Halley's method (row two, left), Algorithm 2.1 (row two, middle), Algorithm 2.2 (row two, right) and Algorithm 2.3 (row three) are given below.

Figure 2. Polynomiographs of \(z^{3}-1=0.\)

Example 7.3. Polynomiographs for \(z^{3}-z^2+1=0\) via Newton's method (row one, left), Ostrowski's method (row one, middle), Traub's method (row one, right), modified Halley's method (row two, left), Algorithm 2.1 (row two, middle), Algorithm 2.2 (row two, right) and Algorithm 2.3 (row three) are given below.

Figure 3. Polynomiographs of \(z^{3}-z^2+1=0.\)

Example 7.4. Polynomiographs for \(z^{4}-1=0\) via Newton's method (row one, left), Ostrowski's method (row one, middle), Traub's method (row one, right), modified Halley's method (row two, left), Algorithm 2.1 (row two, middle), Algorithm 2.2 (row two, right) and Algorithm 2.3 (row three) are given below.

Figure 4. Polynomiographs of \(z^{4}-1=0.\)

Example 7.5. Polynomiographs for \(z^{4}-z^2-1=0\) via Newton's method (row one, left), Ostrowski's method (row one, middle), Traub's method (row one, right), modified Halley's method (row two, left), Algorithm 2.1 (row two, middle), Algorithm 2.2 (row two, right) and Algorithm 2.3 (row three) are given below.

Figure 5. Polynomiographs of \(z^{4}-z^2-1=0.\)

Example 7.6. Polynomiographs for \(z^{5}-1=0\) via Newton's method (row one, left), Ostrowski's method (row one, middle), Traub's method (row one, right), modified Halley's method (row two, left), Algorithm 2.1 (row two, middle), Algorithm 2.2 (row two, right) and Algorithm 2.3 (row three) are given below.

Figure 6. Polynomiographs of \(z^{5}-1=0.\)

Example 7.7. Polynomiographs for \(z^{5}-z^3+2=0\) via Newton's method (row one, left), Ostrowski's method (row one, middle), Traub's method (row one, right), modified Halley's method (row two, left), Algorithm 2.1 (row two, middle), Algorithm 2.2 (row two, right) and Algorithm 2.3 (row three) are given below.

Figure 7. Polynomiographs of \(z^{5}-z^3+2=0.\)

Example 7.8. Polynomiographs for \(z^{6}-1=0\) via Newton's method (row one, left), Ostrowski's method (row one, middle), Traub's method (row one, right), modified Halley's method (row two, left), Algorithm 2.1 (row two, middle), Algorithm 2.2 (row two, right) and Algorithm 2.3 (row three) are given below.

Figure 8. Polynomiographs of \(z^{6}-1=0.\)

Example 7.9. Polynomiographs for \(z^{6}-z^4+4=0\) via Newton's method (row one, left), Ostrowski's method (row one, middle), Traub's method (row one, right), modified Halley's method (row two, left), Algorithm 2.1 (row two, middle), Algorithm 2.2 (row two, right) and Algorithm 2.3 (row three) are given below.

Figure 9. Polynomiographs of \(z^{6}-z^4+4=0.\)

Example 7.10. Polynomiographs for \(z^{7}-1=0\) via Newton's method (row one, left), Ostrowski's method (row one, middle), Traub's method (row one, right), modified Halley's method (row two, left), Algorithm 2.1 (row two, middle), Algorithm 2.2 (row two, right) and Algorithm 2.3 (row three) are given below.

Figure 10. Polynomiographs of \(z^{7}-1=0.\)

8. Conclusions

We have established three new sixth-order iterative methods for solving nonlinear equations. We solved some test examples to check the efficiency of our developed methods. Tables 1-5 show that our methods perform better than Newton's method, Ostrowski's method, Traub's method and modified Halley's method. We also compared our methods with Newton's method, Ostrowski's method, Traub's method and modified Halley's method by presenting polynomiographs of different complex polynomials.

Competing Interests

The authors declare that they have no competing interests.

References

  1. Nazeer, W., Naseem, A., Kang, S. M., & Kwun, Y. C. (2016). Generalized Newton Raphson's method free from second derivative. J. Nonlinear Sci. Appl., 9, 2823-2831.
  2. Nazeer, W., Tanveer, M., Kang, S. M., & Naseem, A. (2016). A new Householder's method free from second derivatives for solving nonlinear equations and polynomiography. J. Nonlinear Sci. Appl., 9, 998-1007.
  3. Chun, C. (2006). Construction of Newton-like iteration methods for solving nonlinear equations. Numerische Mathematik, 104(3), 297-315.
  4. Burden, R. L., Faires, J. D., & Reynolds, A. C. (2001). Numerical Analysis.
  5. Stoer, J., & Bulirsch, R. (2013). Introduction to Numerical Analysis (Vol. 12). Springer Science & Business Media.
  6. Quarteroni, A., Sacco, R., & Saleri, F. (2010). Numerical Mathematics (Vol. 37). Springer Science & Business Media.
  7. Chen, D., Argyros, I. K., & Qian, Q. S. (1993). A note on the Halley method in Banach spaces. Applied Mathematics and Computation, 58(2-3), 215-224.
  8. Gutiérrez, J. M., & Hernández, M. A. (2001). An acceleration of Newton's method: Super-Halley method. Applied Mathematics and Computation, 117(2-3), 223-239.
  9. Gutiérrez, J. M., & Hernández, M. A. (1997). A family of Chebyshev-Halley type methods in Banach spaces. Bulletin of the Australian Mathematical Society, 55(1), 113-130.
  10. Householder, A. S. (1970). The Numerical Treatment of a Single Nonlinear Equation. McGraw-Hill, New York.
  11. Sebah, P., & Gourdon, X. (2001). Newton's method and high order iterations. Numbers Comput., 1-10.
  12. Traub, J. F. (1982). Iterative Methods for the Solution of Equations (Vol. 312). American Mathematical Society.
  13. Inokuti, M., Sekine, H., & Mura, T. (1978). General use of the Lagrange multiplier in nonlinear mathematical physics. Variational Method in the Mechanics of Solids, 33(5), 156-162.
  14. He, J. H. (2007). Variational iteration method—some recent results and new interpretations. Journal of Computational and Applied Mathematics, 207(1), 3-17.
  15. He, J. H. (1999). Variational iteration method–a kind of non-linear analytical technique: some examples. International Journal of Non-Linear Mechanics, 34(4), 699-708.
  16. Noor, M. A. (2007). New classes of iterative methods for nonlinear equations. Applied Mathematics and Computation, 191(1), 128-131.
  17. Noor, M. A., & Shah, F. A. (2009). Variational iteration technique for solving nonlinear equations. Journal of Applied Mathematics and Computing, 31(1-2), 247-254.
  18. Kou, J. (2007). The improvements of modified Newton's method. Applied Mathematics and Computation, 189(1), 602-609.
  19. Abbasbandy, S. (2003). Improving Newton-Raphson method for nonlinear equations by modified Adomian decomposition method. Applied Mathematics and Computation, 145(2-3), 887-893.
  20. Naseem, A., Awan, M. W., & Nazeer, W. (2016). Dynamics of an iterative method for nonlinear equations. Sci. Int. (Lahore), 28(2), 819-823.
  21. Naseem, A., Nazeer, W., & Awan, M. W. (2016). Polynomiography via modified Abbasbanday's method. Sci. Int. (Lahore), 28(2), 761-766.
  22. Naseem, A., Attari, M. Y., & Awan, M. W. (2016). Polynomiography via modified Golbabi and Javidi's method. Sci. Int. (Lahore), 28(2), 867-871.
  23. Traub, J. F. (1982). Iterative Methods for the Solution of Equations (Vol. 312). American Mathematical Society.
  24. Nawaz, M., Naseem, A., & Nazeer, W. (2018). New iterative methods using variational iteration technique and their dynamical behavior. Open J. Math. Anal., 2(2), 01-09.
  25. Noor, M. A., & Shah, F. A. (2009). Variational iteration technique for solving nonlinear equations. Journal of Applied Mathematics and Computing, 31(1-2), 247-254.
  26. Noor, M. A., Shah, F. A., Noor, K. I., & Al-Said, E. (2011). Variational iteration technique for finding multiple roots of nonlinear equations. Scientific Research and Essays, 6(6), 1344-1350.
  27. Noor, M. A., Khan, W. A., & Hussain, A. (2007). A new modified Halley method without second derivatives for nonlinear equation. Applied Mathematics and Computation, 189(2), 1268-1273.
  28. Noor, K. I., & Noor, M. A. (2007). Predictor-corrector Halley method for nonlinear equations. Applied Mathematics and Computation, 188(2), 1587-1591.
  29. Kalantari, B. (2005). U.S. Patent No. 6,894,705. Washington, DC: U.S. Patent and Trademark Office.
  30. Kalantari, B. (2005). Polynomiography: from the fundamental theorem of Algebra to art. Leonardo, 38(3), 233-238.
  31. Kotarski, W., Gdawiec, K., & Lisowska, A. (2012, July). Polynomiography via Ishikawa and Mann iterations. In International Symposium on Visual Computing (pp. 305-313). Springer, Berlin, Heidelberg.
  32. Kalantari, B. (2009). Polynomial Root-Finding and Polynomiography. World Scientific, Singapore.
OMA-Vol. 2 (2018), Issue 2, pp. 156–171

Open Journal of Mathematical Analysis

Global solution and asymptotic behaviour for a wave equation type \(p\)-laplacian with memory

Carlos Alberto Raposo\(^{1}\), Adriano Pedreira Cattai, Joilson Oliveira Ribeiro
Mathematics Department, Federal University of São João del-Rey 36307-352 São João
Mathematics Department, State University of Bahia 41150-000 Salvador-BA, Brasil.;(A.P.C)
Mathematics Department, Federal University of Bahia 40170-110 Salvador-BA, Brasil.; (J.O.R)
\(^{1}\)Corresponding Author;  raposo@ufsj.edu.br

Copyright © 2018 Carlos Alberto Raposo, Adriano Pedreira Cattai, Joilson Oliveira Ribeiro. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.

Abstract

In this work we study the global solution, uniqueness and asymptotic behaviour of the nonlinear equation
\begin{eqnarray*}
u_{tt} - \Delta_{p} u = \Delta u - g*\Delta u
\end{eqnarray*}
where \(\Delta_{p} u\) is the nonlinear \(p\)-Laplacian operator, \(p \geq 2\), and \(g*\Delta u\) is a memory damping. The global solution is constructed by means of the Faedo-Galerkin approximations, taking into account that the initial data lies in an appropriate set of stability created from the Nehari manifold, and the asymptotic behaviour is obtained by using a result of P. Martinez based on a new inequality that generalizes the results of Haraux and Nakao.

Keywords:

\(p\)-Laplacian operator; Global solution; Asymptotic behaviour; Memory.

1. Introduction

Throughout this paper we omit the space variable \(x\) of \(u(x,t)\) and simply denote \(u(x,t)\) by \(u(t)\) when no confusion arises; \(c\) denotes various positive constants depending on the known constants, which may be different at each appearance. We use the Sobolev spaces with their properties as in R. A. Adams [1] and H. Brezis [2]. Let \(\Omega\subset\mathbb{R}\) be an open and bounded interval, \(2\leq p < \infty\) and \(p'\) such that \(\dfrac{1}{p}+\dfrac{1}{p'}=1\). The duality pairing between the space \(W_0^{1,p}(\Omega)\) and its dual \(W^{-1,p'}(\Omega)\) will be denoted using the form \(\langle \,\cdot\,,\,\cdot\,\rangle_p\). According to Poincaré's inequality, the standard norm \(\|\,\cdot\,\|_{W_0^{1,p}(\Omega)}\) is equivalent to the norm \(\|\nabla \,\cdot\,\|_p\) on \(W_0^{1,p}(\Omega)\). Henceforth, we put \(\|\,\cdot\,\|_{W_0^{1,p}(\Omega)} = \|\nabla \,\cdot\,\|_p\). We denote \(\|\,\cdot\,\|_{L^2(\Omega)} = | \,\cdot\,|_{2}\) and the usual inner product by \(( \,\cdot\,,\,\cdot\,)\). We denote the \(p\)-Laplacian operator by \(\Delta_p u\), which can be extended to a monotone, bounded, hemicontinuous and coercive operator between the space \(W_0^{1,p}(\Omega)\) and its dual by $$\begin{array}{l} -\Delta_p \colon W_0^{1,p}(\Omega) \to W^{-1,p'}(\Omega)\\[3mm] \langle -\Delta_p u, v\rangle_p = \displaystyle\int_{\Omega} |\nabla u|^{p-2} \nabla u \nabla v \operatorname{d}\!x. \end{array} $$ Nonlinear hyperbolic problems involving the \(p\)-Laplacian have become the object of increasing interest in recent years. The existence of a global solution for the wave equation of \(p\)-Laplacian type
\begin{eqnarray} u_{tt}-\Delta_p u =0 \label{p_Laplacian} \end{eqnarray}
(1)
without an additional dissipation term is an open problem. For \(n=1\), M. Dreher [3] proved the local-in-time existence of solutions and showed by a generic counterexample that a global-in-time solution cannot be expected.

Adding a strong damping (\(-\Delta u_t\)) in (1), the well-posedness and asymptotic behavior were studied by J. M. Greenberg [4]. In fact, the strong damping plays an important role in the existence and stability theory for the \(p\)-Laplacian wave equation; see for instance, for \(n\geq2\), [5, 6, 7, 8, 9, 10, 11, 12]. Nevertheless, if the strong damping is replaced by the weaker damping (\(u_t\)), then global existence and uniqueness are only known for \(n=1,2\), see [13, 14]. For the intermediate damping \((-\Delta)^{\alpha} u_t\), with \(0 < \alpha \leq 1\), the existence of a global solution depending on the growth of a forcing term was proved in [15]. The background of these problems is in physics, especially in solid mechanics.

To the best of our knowledge, this is the first time that an alternative damping for the wave equation with the \(p\)-Laplacian operator is considered. In this work we consider a memory damping, acting only on \(\Delta u\), given by the usual convolution in time $$ (g\ast \Delta u)(x,t) = \displaystyle\int_{0}^{t} g(t-s)\Delta u(x,s)\operatorname{d}\!s $$ with the kernel \(g\) a real-valued function. We are interested in proving the existence of a global solution and the energy decay for the problem

\begin{equation} u_{tt} - \Delta_p u \!\!= \!\!\Delta u - g\ast \Delta u \quad\mbox{in}\quad \Omega \times [0,\infty), \label{eq:01} \end{equation}
(2)
\begin{equation} u(x,0) \!\!=\!\! u_0(x), ~~ u_t(x,0) = u_1(x), ~~ x\in\Omega, \label{eq:02} \end{equation}
(3)
\begin{equation} u(x,t)\!\!=\!\! 0 \quad\mbox{on}\quad \partial \Omega \times [0,\infty).\label{eq:03} \end{equation}
(4)
This paper is organized as follows. Section 2 deals with the potential well, where we introduce the stability set for the problem. In Section 3 we introduce some notations and preliminary results. In Section 4 we introduce a suitable Galerkin basis. In Section 5 we prove the existence of a solution by the Faedo-Galerkin method, and finally in Section 6 we use the result of P. Martinez [16], which generalizes the results of Haraux [17] and Nakao [18], to prove the energy decay in an appropriate set of stability.

2. The Potential Well

It is well known that the energy of a PDE system is, in some sense, split into kinetic and potential energy. Following the idea of Y. Ye [19] we are able to construct a set of stability as follows. We will prove that there is a valley, or a well, of depth \(d\) created in the potential energy. If this depth \(d\) is strictly positive, we find that, for solutions with initial data in the good part of the well, the potential energy of the solution can never escape the well. In general, it is possible for the energy from the source term to cause blow-up in finite time. However, in the good part of the well it remains bounded. As a result, the total energy of the solution remains finite on any time interval \([0, T)\), which provides the global existence of the solution. We start by introducing the functional \(J\,:\, W_0^{1,p}(\Omega) \rightarrow \mathbb{R}\) by

\begin{equation}\label{eq:04} J(u) = \dfrac{1}{p}\|\nabla u\|_{p}^p -\dfrac{1}{2}\displaystyle\int_{\Omega}\left(g\ast|\nabla u|^2\right)(t)\operatorname{d}\!x. \end{equation}
(5)
For \(u \in W_0^{1,p}(\Omega)\) we define the functional
\begin{equation} J(\lambda u) = \dfrac{\lambda^p}{p}\|\nabla u\|_{p}^p -\dfrac{\lambda}{2}\displaystyle\int_{\Omega}\left(g\ast|\nabla u|^2\right)(t)\operatorname{d}\ x,\,\, 0 < \lambda \leq 1. \end{equation}
(6)
Associated with the \(J\) we have the well known Nehari Manifold given by $$ \mathcal{N} \stackrel{\rm{def}}{=} \left\{ u \in W_0^{1,p}(\Omega)/\{0\}\,\,:\,\, \left[ \dfrac{\operatorname{d}}{\operatorname{d}\!\lambda} J(\lambda u)\right] _{\lambda=1}=0 \right\}.$$ From (6) we get $$ \dfrac{\operatorname{d}}{\operatorname{d}\!\lambda} J(\lambda u) = \lambda^{p-1}\|\nabla u\|_{p}^p -\dfrac{1}{2}\displaystyle\int_{\Omega} \left(g\ast|\nabla u|^2\right)(t) \operatorname{d}\!x, $$ then $$ \mathcal{N} \stackrel{\rm{def}}{=} \left\{ u \in W_0^{1,p}(\Omega)/\{0\}\,\,:\,\,\|\nabla u\|_{p}^p =\dfrac{1}{2}\displaystyle\int_{\Omega} \left(g\ast|\nabla u|^2\right)(t) \operatorname{d}\!x \right\}.$$ We define as in the Mountain Pass theorem due to Ambrosetti and Rabinowitz [19], $$ d \stackrel{\rm{def}}{=} \inf_{u \in W_0^{1,p}(\Omega)/\{0\}} \sup_{0 \leq \lambda } J(\lambda u).$$ It is well-known that the depth of the well \(d\) is a strictly positive constant, see [20, Theorem 4.2], and $$ d = \inf_{u \in \mathcal{N}} J(u).$$ In fact, in our problem, the solution of \(\dfrac{\operatorname{d}}{\operatorname{d}\!\lambda} J(\lambda u)=0\) is $$ \lambda_{\ast} =\left[ \dfrac{\dfrac{1}{2}\displaystyle\int_{\Omega}\left(g\ast|\nabla u|^2\right)(t)\operatorname{d}\!x}{\|\nabla u\|_{p}^{p}} \right]^{\dfrac{1}{p-1}}. $$ We have $$ \dfrac{\operatorname{d}^2}{\operatorname{d}\!\lambda^2} J(\lambda u) = (p-1)\lambda^{p-2} \|\nabla u\|_{p}^{p} >0, $$ and then \(\lambda_{\ast}\) is a global minimum. For \(p\geq2\), \(J(\lambda_\ast u) < 0\), so we introduce the sets $$\mathcal{W}_1=\{u\in W_0^{1,p}(\Omega); J(\lambda_\ast u) \leq J(\lambda u)\leq0\}$$ and $$\mathcal{W}_2=\{u\in W_0^{1,p}(\Omega); 0< J(\lambda u)\}.$$ The potential well is defined by \( \mathcal{W} = \{ u \in W_0^{1,p}\,\,:\,\, J(u)< d\} \cup \{0\}\) and partition it into two sets \begin{eqnarray*} V= \left\{ u \in \mathcal{W}\,\,:\,\, \dfrac{1}{2}\displaystyle\int_{\Omega} \left(g\ast|\nabla u|^2\right)(t) \operatorname{d}\!x \leq \|\nabla u\|_{p}^p \right\} \cup \{0\}, \end{eqnarray*} \begin{eqnarray*} W= \left\{ u \in \mathcal{W}\,\,:\,\, \|\nabla u\|_{p}^p < \dfrac{1}{2}\displaystyle\int_{\Omega} \left(g\ast|\nabla u|^2\right)(t) \operatorname{d}\!x \right\}. \end{eqnarray*} We will refer to \(V\) as the "good" part of the well and \(W\) as the "bad" part of the well. Then we define by \(V\) the set of stability for the problem (2)-(4).

3. Preliminaries

We introduce the symbols ''\(\Box\)'' and ''\(\diamond\)'' which denote the following convolutions respectively \begin{align*} (g\Box h)(t)&\stackrel{\rm{def}}{=} \displaystyle\int_0^{t} g(t-s)|h(t)-h(s)|^2 \operatorname{d}\!s,\\ (g\diamond h)(t)&\stackrel{\rm{def}}{=} \displaystyle\int_0^{t} g(t-s)\left(h(t)-h(s)\right) \operatorname{d}\!s. \end{align*} We state two basic results, see [19], that will be used in the sequel.

Lemma 3.1. For any functions \(g,h\in C\left([0,\infty],\mathbb{R}\right)\) we have that \begin{align*} (g\ast h)(t) &= \left(\displaystyle\int_{0}^{t} g(s)\operatorname{d}\!s\right)h(t) - (g\diamond h)(t)\\ \left|(g\diamond h)(t)\right|^2 &\leq \left(\displaystyle\int_{0}^{t} |g(s)|\operatorname{d}\!s\right)(|g|\Box h)(t) \end{align*}
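A sketch of the computation behind the first identity may be helpful: writing \(h(s) = h(t) - \left(h(t)-h(s)\right)\) inside the convolution introduced in Section 1 gives \begin{align*} (g\ast h)(t) = \displaystyle\int_0^t g(t-s)h(s)\operatorname{d}\!s &= \displaystyle\int_0^t g(t-s)\Big[h(t) - \left(h(t)-h(s)\right)\Big]\operatorname{d}\!s\\ &= \left(\displaystyle\int_0^t g(s)\operatorname{d}\!s\right)h(t) - (g\diamond h)(t), \end{align*} while the second estimate follows from the Cauchy-Schwarz inequality applied with the weight \(|g(t-s)|\).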

Lemma 3.2. For \(g,h\in C\left([0,\infty],\mathbb{R}\right)\) we have $$ 2(g\ast h)(t)h'(t) = (g'\Box h)(t) - g(t)|h(t)|^2 + \dfrac{\operatorname{d}}{\operatorname{d}\!t} \left[\left(\displaystyle\int_{0}^{t} g(s)\operatorname{d}\!s\right)|h(t)|^2 - (g\Box h)(t)\right] $$
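A sketch of the computation: differentiating \((g\Box h)(t)\) in \(t\) and using Lemma 3.1 gives \begin{align*} \dfrac{\operatorname{d}}{\operatorname{d}\!t}(g\Box h)(t) &= (g'\Box h)(t) + 2h'(t)\left[\left(\displaystyle\int_0^t g(s)\operatorname{d}\!s\right)h(t) - (g\ast h)(t)\right],\\ \dfrac{\operatorname{d}}{\operatorname{d}\!t}\left[\left(\displaystyle\int_0^t g(s)\operatorname{d}\!s\right)|h(t)|^2\right] &= g(t)|h(t)|^2 + 2\left(\displaystyle\int_0^t g(s)\operatorname{d}\!s\right)h(t)h'(t); \end{align*} subtracting the first identity from the second and rearranging yields exactly the formula of Lemma 3.2.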

From now on, the function \(g\) is of exponential type, that is, \(g>0\) and there exist \(c_i>0\) (\(i=0,1\)) such that \begin{eqnarray*} -c_0g(t) \leq g'(t) \leq -c_1g(t)\,\,\mbox{ and }\,\, 0 < 1 - \displaystyle\int_{0}^{\infty} g(t) \operatorname{d}\!t \label{eq:HG3} \end{eqnarray*} (for instance, \(g(t)=\frac{1}{2}e^{-t}\) satisfies these conditions with \(c_0=c_1=1\)). The energy of the problem (2)-(4) is defined as $$ E(t) \stackrel{\rm{def}}{=} \dfrac{1}{2}|u'(t)|_2^2 + \dfrac{1}{p}\|\nabla u(t)\|_p^p + \dfrac{1}{2}\left(1-\displaystyle\int_{0}^{t} g(s)\operatorname{d}\!s \right)|\nabla u(t)|_2^2 + \dfrac{1}{2}(g\Box\nabla u)(t). $$ Now we present the result of P. Martinez [16] on decay rate estimates for dissipative systems that will be used in Section 6.

Lemma 3.3. Let \(E\colon \mathbb{R}_{+} \to \mathbb{R}_{+}\) be a non increasing function and \(\phi: \mathbb{R}_{+} \to \mathbb{R}_{+}\) an increasing function such that $$ \phi(0)=0 ~~\text{and}~~ \phi(t) \to +\infty \text{ as } t\to+\infty. $$ Assume that there exist \(q\geq0\) and \(A>0\) such that $$ \displaystyle\int_{S}^{+\infty} E(t)^{q+1}\phi'(t)\operatorname{d}\!t \leq AE(S), ~~ 0\leq S < +\infty. $$ Then we have $$ E(t) \leq cE(0)\left(1+\phi(t)\right)^{\frac{-1}{q}}, ~~\forall~t\geq0 \text{ if } q>0 $$ and $$ E(t) \leq cE(0) e^{-w\phi(t)}, ~~ \forall~t>0 \text{ if } q=0, $$ where \(c\) and \(w\) are positive constants independent of the initial energy \(E(0)\).


4. The Galerkin basis

Denote by $$\mathcal{K}_j=\{ K \subset\{u \in L^p(\Omega)\,:\, ||u||_p = 1\}\,:\, K\, \mbox{is compact, symmetric and}\, \gamma(K) \geq j\},$$ where \(\gamma(G) = \inf \{ m\,:\,\exists\,\phi\,:\,G \rightarrow \mathbb{R}^{m}\setminus\{0\}, \phi \mbox{ odd continuous function}\}\) denotes the Krasnoselskii genus. In [22] it is proved that $$ \lambda_j = \inf_{G \in \mathcal{K}_j}\,\sup_{u \in G}\, ||\nabla u||_p^p $$ is a sequence of eigenvalues of the \(p\)-Laplacian. \(-\Delta_p \colon W_0^{1,p}(\Omega) \to W^{-1,p'}(\Omega)\) is a monotone, coercive and hemicontinuous operator on \(W_0^{1,p}(\Omega)\). The Minty-Browder theorem, see [19], guarantees the existence of a basis \((w_j)_{j=1}^{\infty}\) for \(W_0^{1,p}(\Omega)\) given by the solution of the stationary problem $$\begin{array}{rcl} -\Delta_p w_j &=& \lambda_j w_j, \\ w_j(0) &=& w_{0j}. \end{array} $$ Using
\begin{equation}\label{eq:05A} W_0^{1,p}(\Omega) \subset L^{2}(\Omega) \subset W^{-1,p'}(\Omega) \end{equation}
(7)
with continuous and dense injection for \(1< p< \infty\), see [21], this basis can be extended on \(L^2(\Omega)\) as a Galerkin basis for the Laplacian operator. In fact, from the Sobolev immersion we have $$ W_0^{\nu,q}(\Omega) \hookrightarrow W_0^{\nu - k,q_k}(\Omega), \,\,\frac{1}{q_k} = \frac{1}{q} - \frac{k}{n}.$$ Choosing \(q_k=p\), \(\nu-k=1\) and \(q =2\), we get $$\nu= 1 + \frac{n}{2} - \frac{n}{p} = 1 + \frac{n(p-2)}{2p} > 0$$ and we obtain a Hilbert space \( H_0^\nu(\Omega)\) such that $$ H_0^\nu(\Omega) = W_0^{\nu,2}(\Omega) \hookrightarrow W_0^{1,p}(\Omega). $$ Let \(s\) be an integer for which \(s > \nu\). We have $$ H_0^s(\Omega) \hookrightarrow W_0^{1,p}(\Omega) \hookrightarrow H_0^1(\Omega) \hookrightarrow L^2(\Omega). $$ By the Rellich-Kondrachov theorem, \( H_0^1(\Omega) \hookrightarrow L^2(\Omega) \) is compact, so the immersion \( H_0^s(\Omega) \hookrightarrow L^2(\Omega) \) is also compact. From spectral theory, there exists an operator defined by $$ \{H_0^s(\Omega),\,L^2(\Omega),\,(\!( \cdot, \cdot)\!)_{H_0^s(\Omega)}\}$$ and a sequence of eigenvectors \((v_j)_{j\in\mathbb{N} }\) of this operator, such that $$ (\!( v_{j},v)\!)_{H_0^s(\Omega)} = \lambda_{j} (v_{j},v), \,\,\mbox{for all}\,\, v \in H_0^s(\Omega) $$ with \( \lambda_{j} >0, \, \lambda_{j} \leq \lambda_{j+1},\,\,\,\mbox{and}\,\,\, \lambda_{j} \rightarrow + \infty \,\,\,\mbox{as}\,\,\, j \rightarrow + \infty \). Moreover \((v_j)_{j\in\mathbb{N}}\) is a complete orthonormal system in \(L^2(\Omega)\) and \(\left(w_j=\frac{v_j}{\sqrt{\lambda_j}}\right)_{j\in\mathbb{N}}\) is a complete orthonormal system in \(H_0^s(\Omega)\). Then \(( w_j)_{j\in\mathbb{N}}\) yields a ``Galerkin basis'' for both \(W_0^{1,p}(\Omega)\) and \(L^2(\Omega)\).

5. Global Solution

5.1. Existence

Theorem 5.1. Given \(u_0 \in V\), \(u_1 \in L^2(\Omega)\) there exists a function $$u\colon \Omega\times(0,T) \to \mathbb{R}$$ such that $$ \begin{array}{l} u \in L^{\infty}(0,T;W_0^{1,p}(\Omega)),\qquad u' \in L^{\infty}(0,T;L^{2}(\Omega)),\\ u(x,0)=u_0(x), ~~ u_t(x,0)=u_1(x) ~~ a. e. \,\,in \,\,\Omega,\\ \dfrac{\operatorname{d}}{\operatorname{d}\!t}(u_t,v) + \langle -\Delta_p u,v \rangle_p + (-\Delta u,v) + (g\ast\Delta u,v)=0, ~~ \forall ~v\in W_0^{1,p}(\Omega) \text{ in } D'(0,T). \end{array} $$

Proof. Now, for each \(m\in\mathbb{N}\), let us put \(V_m=\operatorname{Span}\{w_1,w_2,\ldots,w_m\}\). We search for a function \(u_m(t) = \displaystyle\sum_{j=1}^{m} k_{jm}(t) w_j\) such that for any \(v\in V_m\), \(u_m(t)\) satisfies the approximate equation

\begin{equation}\label{eq:06} (u''_m(t),v) + \langle-\Delta_p u_m(t) ,v \rangle_p + (-\Delta u_m(t),v) + (g\ast \Delta u_m(t) ,v) = 0, \end{equation}
(8)
with the initial conditions \(u_m(0)=u_{0m}\) and \(u'_m(0) = u_{1m}\), where \(u_{0m}\) and \(u_{1m}\) are chosen in \(V_m\) so that $$ u_{0m} \to u_0 ~\in~ W_0^{1,p}(\Omega) ~~ \mbox{ and } ~~ u_{1m} \to u_1 ~\mbox{ in }~ L^{2}(\Omega). $$ Putting \(v=w_i\), \(i=1,2,\ldots,m\), and using $$ \begin{array}{rcl} u''_m(t) &=& \displaystyle\sum_{j=1}^{m} k''_{jm}(t) w_j(x), \\ \Delta u_m(t) &=& \displaystyle\sum_{j=1}^{m} k_{jm}(t) \Delta w_j(x), \\ \Delta_p u_m(t) &=& \displaystyle\sum_{j=1}^{m} k_{jm}(t) \Delta_p w_j(x), \\ (g\ast\Delta u_m)(t) &=& \displaystyle\sum_{j=1}^{m} (g\ast k_{jm})(t) \Delta w_j(x), \end{array} $$ we observe that (8) is a system of ODEs in the variable \(t\) and has a local solution \(u_m(t)\) in an interval \([0,t_m)\), by virtue of Carathéodory's theorem, see [24]. In the next step we obtain a priori estimates for the solution \(u_m(t)\) so that it can be extended to the whole interval \([0,T]\), \(T>0\).
A priori estimates: We replace \(v=u'_m(t)\) in the approximate equation (8) and we get
\begin{equation}\label{eq:07} \left(u''_m(t), u'_m(t)\right) - \langle \Delta_p u_m(t), u'_m(t) \rangle_p - \left(\Delta u_m(t),u'_m(t)\right) + \left(g\ast\Delta u_m(t) , u'_m(t)\right) = 0 \end{equation}
(9)
Let \(\theta \in D(0,t_m)\). We denote by \(\langle \,\cdot\,,\,\cdot\,\rangle\) the duality pairing between \(D'\) and \(D\). So we have,
\begin{equation} \left\langle (u''_m(t), u'_m(t)),\theta\right\rangle = \left\langle \dfrac{1}{2}\dfrac{\operatorname{d}}{\operatorname{d}\!t}|u'_m(t)|_2^2 ,\theta \right\rangle \label{eq:08} \end{equation}
(10)
\begin{equation} \left\langle \langle -\Delta_p u_m(t), u'_m(t)\rangle_p,\theta\right\rangle = \left\langle \dfrac{1}{p}\dfrac{\operatorname{d}}{\operatorname{d}\!t}\|\nabla u_m(t)\|_p^p ,\theta \right\rangle \label{eq:09} \end{equation}
(11)
\begin{equation} \left\langle (-\Delta u_m(t), u'_m(t)),\theta\right\rangle = \left\langle \dfrac{1}{2}\dfrac{\operatorname{d}}{\operatorname{d}\!t}|\nabla u_m(t)|_2^2 ,\theta \right\rangle \label{eq:10} \end{equation}
(12)
Now, note that $$\begin{array}{rcl} \left(g\ast\Delta u_m(t) , u'_m(t)\right) = -\left((g\ast\nabla u_m)(t) , \nabla u'_m(t)\right). \end{array}$$ By Lemma 3.2 we have $$\begin{array}{rcl} 2\left( (g\ast\nabla u_m)(t),\nabla u'_m(t) \right)\!\!\!\! &=&\!\!\!\! \displaystyle\int_{\Omega} (g'\Box\nabla u_m)(t) \operatorname{d}\!x - g(t) \displaystyle\int_{\Omega} |\nabla u_m(t)|^2 \operatorname{d}\!x \\[2mm] &&- \displaystyle\int_{\Omega} \dfrac{\operatorname{d}}{\operatorname{d}\!t}\left[(g\Box \nabla u_m)(t)- \left(\displaystyle\int_{0}^{t} g(s) \operatorname{d}\!s\right) |\nabla u_m(t)|^2 \right]\operatorname{d}\!x. \end{array} $$ Then,
\begin{eqnarray} \langle (g\ast \Delta u_m(t),u'_m(t)),\theta \rangle \!\!\!\! &=& \!\!\!\! \left\langle -\dfrac{1}{2} \displaystyle\int_{\Omega} (g'\Box\nabla u_m)(t) \operatorname{d}\!x + \dfrac{1}{2}g(t)|\nabla u_m(t)|_2^2 \right. \nonumber\\ &&\!\left.+\dfrac{1}{2}\dfrac{\operatorname{d}}{\operatorname{d}\!t} \displaystyle\int_{\Omega} \!\!(g\Box\nabla u_m)(t) \operatorname{d}\!x - \left( \displaystyle\int_{0}^{t} \!\!\!g(s)\operatorname{d}\!s\right) |\nabla u_m(t)|_2^2 ,\theta \right\rangle. \nonumber\\ \label{eq:11} \end{eqnarray}
(13)
Replacing (10), (11), (12), (13) in (9) we obtain in \(D'(0,t_m)\)
\begin{align} &\dfrac{\operatorname{d}}{\operatorname{d}\!t}\left\{\dfrac{1}{2}|u'_m(t)|_2^2 + \dfrac{1}{p}\|\nabla u_m(t)\|_p^p + \dfrac{1}{2}(g\Box\nabla u_m)(t) + \dfrac{1}{2}\left(1-\displaystyle\int_{0}^{t} g(s)\operatorname{d}\!s \right) |\nabla u_m(t)|_2^2 \right\}\nonumber\\ =&\dfrac{1}{2} \displaystyle\int_{\Omega} (g'\Box\nabla u_m)(t) \operatorname{d}\!x -\dfrac{1}{2}g(t) |\nabla u_m(t)|_2^2 \label{eq:12} \end{align}
(14)
The approximate energy $$ E_m(t) = \dfrac{1}{2}|u'_m(t)|_2^2 + \dfrac{1}{p}\|\nabla u_m(t)\|_p^p + \dfrac{1}{2}\left(1-\displaystyle\int_{0}^{t} g(s)\operatorname{d}\!s \right) |\nabla u_m(t)|_2^2 + \dfrac{1}{2}(g\Box\nabla u_m)(t) $$ satisfies $$ \dfrac{\operatorname{d}}{\operatorname{d}\!t} E_m(t) \leq -\dfrac{c_1}{2}\displaystyle\int_\Omega (g\Box\nabla u_m)(t) \operatorname{d}\!x - \dfrac{1}{2}g(t)|\nabla u_m(t)|_2^2. $$ Then \(E_m(t) \leq E_m(0)\). Due to the convergence of the initial data, there exists a constant \(c>0\), independent of \(t\) and \(m\), such that \(E_m(t)\leq c\). With this estimate we can extend the approximate solutions \(u_m(t)\) to the whole interval \([0,T]\), see [25], and we have
\begin{equation} u_m(t) \mbox{ is bounded in } L^{\infty}(0,T;W_0^{1,p}(\Omega)),\label{eq:13} \end{equation}
(15)
\begin{equation} u'_m(t) \mbox{ is bounded in } L^{\infty}(0,T;L^{2}(\Omega)),\label{eq:14} \end{equation}
(16)
\begin{equation} - \Delta_p u_m(t) \mbox{ is bounded in } L^{\infty}(0,T;W^{-1,p'}(\Omega)).\label{eq:15} \end{equation}
(17)
From (15) and Lemma 3.1 we deduce
\begin{equation}\label{eq:16} (g\ast \nabla u_m)(t) \mbox{ is bounded in } L^{\infty}(0,T;L^{2}(\Omega)). \end{equation}
(18)
Passage to the limit: From (15), (16) and (17), passing to a subsequence if necessary, there exists \(u\) such that
\begin{equation} u_m \rightharpoonup u ~ \mbox{ weakly star in } L^{\infty}(0,T;W_0^{1,p}(\Omega)) \label{eq:17} \end{equation}
(19)
\begin{equation} u'_m \rightharpoonup u' ~ \mbox{ weakly star in } L^{\infty}(0,T;L^{2}(\Omega)) \label{eq:18} \end{equation}
(20)
\begin{equation} g\ast \nabla u_m \rightharpoonup g\ast \nabla u ~ \mbox{ weakly star in } L^{\infty}(0,T;L^{2}(\Omega)) \label{eq:19} \end{equation}
(21)
and in view of (17) there exists \(\mathcal{X}\) such that
\begin{equation}\label{eq:20} -\Delta_p u_m(t)\to \mathcal{X} \mbox{ weakly in } L^{\infty}(0,T;W^{-1,p'}(\Omega)). \end{equation}
(22)
With these convergences we can pass to the limit in the approximate equation (8), see [26, 27], and then $$ \dfrac{\operatorname{d}}{\operatorname{d}\!t}(u'(t),v) + \langle \mathcal{X}(t),v\rangle_p + (-\Delta u(t), v) + ((g\ast \nabla u)(t),v) = 0, $$ for all \(v\in W_0^{1,p}(\Omega)\) in the sense of distributions. For \(x,y\in\mathbb{R}\) and \(p\geq 2\), consider the elementary inequalities
\begin{equation} \left| |x|^\frac{p-2}{2}x - |y|^\frac{p-2}{2}y \right| \leq C\left(|x|^\frac{p-2}{2}+|y|^\frac{p-2}{2}\right)|x-y|, \label{eq:19AA} \end{equation}
(23)
\begin{equation} \left| |x|^{p-2}x - |y|^{p-2}y \right| \leq C\left(|x|^{\frac{p-2}{2}}+|y|^{\frac{p-2}{2}}\right)\left| |x|^{\frac{p-2}{2}}x - |y|^{\frac{p-2}{2}}y \right|. \label{eq:19BB} \end{equation}
(24)
The inequality (23) is a consequence of the mean value theorem, and (24) can be found in [28]. As in [29], applying (23), (24) and the generalized Hölder inequality with $$\dfrac{p-2}{4p}+ \dfrac{p-2}{4p} + \dfrac{1}{2}+\dfrac{1}{p}=1$$ we deduce, for all \(v \in W_0^{1,p}(\Omega)\), \begin{eqnarray*} \left| \int_{0}^{T} \langle -\Delta_p u_m(t),v\rangle_p - \langle -\Delta_p u(t),v \rangle_p \operatorname{d}\!t \right| \leq c \int_{0}^T |\nabla u_m(t)- \nabla u(t)|_{2} \operatorname{d}\!t. \end{eqnarray*} Now we are going to obtain an estimate for \(u''_m(t)\). Since our Galerkin basis was taken in the Hilbert space \(L^2(\Omega)\), we can use the standard projection arguments as described in Lions [26]. Then from the approximate equation and the estimates (15)-(17) we get
\begin{eqnarray} u''_m(t) &\mbox{is bounded in}& L^{\infty}(0,T;W^{-1,q}(\Omega)).\label{eq:convergence} \end{eqnarray}
(25)
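For the reader's convenience, we note that the exponents used with the generalized Hölder inequality above do form a partition of unity: $$ \dfrac{p-2}{4p}+\dfrac{p-2}{4p}+\dfrac{1}{2}+\dfrac{1}{p} = \dfrac{p-2}{2p}+\dfrac{1}{p}+\dfrac{1}{2} = \dfrac{p-2+2}{2p}+\dfrac{1}{2} = \dfrac{1}{2}+\dfrac{1}{2}=1. $$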
Applying the Lions-Aubin compactness we get from (19), (20) and (25),
\begin{equation} u_m(t) \rightarrow u(t) ~ \mbox{ strongly in } L^{2}(0,T;L^{2}(\Omega)), \label{eq:21B} \end{equation}
(26)
\begin{equation} u'_m(t) \rightarrow u'(t) ~ \mbox{ strongly in } L^{2}(0,T;L^{2}(\Omega)). \label{eq:21BB} \end{equation}
(27)
Using (26) we get that \(u_m(t) \rightarrow u(t) ~ \mbox{ almost everywhere in } \Omega\times (0,T)\), and we have
\begin{eqnarray}\label{Limit2} -\Delta_p u_m(t) \rightharpoonup -\Delta_p u(t) ~ \mbox{ weakly star in } L^{\infty}(0,T;W^{-1,p'}(\Omega)). \end{eqnarray}
(28)
From (22), (28) and the uniqueness of the limit we conclude that \(\mathcal{X}(t) = -\Delta_p u(t)\). The verification of the initial data is a routine procedure. The proof of existence is complete.

5.2. Uniqueness

Let \(u\) and \(v\) be solutions of (2)-(4) such that $$u(x,0)=u_0=v(x,0)\quad\mbox{and}\quad u_t(x,0) = u_1 =v_t(x,0).$$ Denoting \(w=u-v\) we have
\begin{equation} w_{tt} - \Delta w \!\!=\!\! \Delta_p u - \Delta_p v - g\ast \Delta w, \quad\mbox{in}\quad \Omega \times [0,\infty), \label{eq:01-uni} \end{equation}
(29)
\begin{equation} w(x,0)\!\! =\!\! 0, ~~ w_t(x,0) = 0, ~~ x\in\Omega, \label{eq:02-uni} \end{equation}
(30)
\begin{equation} w(x,t)\!\!=\!\! 0 \quad\mbox{on}\quad \partial \Omega \times [0,\infty). \label{eq:03-uni} \end{equation}
(31)
We will use the Vishik-Ladyzhenskaya method [30]. Consider for each \(\eta \in [0,T]\) the following function
\begin{equation}\label{eq:03A-uni} \psi(x,t) = \left\{ \begin{array}{ccc} -\displaystyle\int_t^\eta w(x,\xi)\operatorname{d}\!\xi &,& 0\leq t < \eta,\\[2mm] 0 &,& \eta \leq t \leq T. \end{array} \right. \end{equation}
(32)
Then,
\begin{equation}\label{eq:04-uni} \psi_t(x,t) = \left\{ \begin{array}{ccc} w(x,t) &,& 0\leq t < \eta,\\[2mm] 0 &,& \eta \leq t \leq T. \end{array} \right. \end{equation}
(33)
As \(w\in L^{\infty}(0,T; W_0^{1,p}(\Omega))\), \(w_t \in L^{\infty}(0,T;L^2(\Omega))\) we have
\begin{equation}\label{eq:05-uni} \psi, ~ \psi_t \in L^{\infty}(0,T;L^2(\Omega)). \end{equation}
(34)
Multiplying (29) by \(\psi\) and integrating over \(\Omega\) gives $$ (w_{tt},\psi) + (\nabla w, \nabla \psi) = \langle \Delta_p u-\Delta_p v, \psi \rangle_p - (g\ast \Delta w ,\psi). $$ Integrating over \([0,\eta]\) and taking into account that \(\psi(x,t)\equiv 0\) for all \(t\in[\eta,T]\), we have $$ \displaystyle\int_0^\eta (w_{tt},\psi)\operatorname{d}\!t + \displaystyle\int_0^\eta (\nabla w, \nabla \psi)\operatorname{d}\!t = \displaystyle\int_0^\eta \langle \Delta_p u-\Delta_p v, \psi \rangle_p \operatorname{d}\!t - \displaystyle\int_0^\eta ((g\ast \Delta w)(t),\psi) \operatorname{d}\!t. $$ As \(\psi(\eta)=w(0)=0\) we get $$ -\displaystyle\int_0^\eta (w_{t},\psi_t)\operatorname{d}\!t + \displaystyle\int_0^\eta (\nabla w, \nabla\psi) \operatorname{d}\!t = \displaystyle\int_0^\eta \langle \Delta_p u - \Delta_p v, \psi\rangle_p \operatorname{d}\!t - \displaystyle\int_0^\eta ((g\ast \Delta w)(t),\psi) \operatorname{d}\!t. $$ From (33), $$ -\displaystyle\int_0^\eta (w_{t},w)\operatorname{d}\!t +\displaystyle\int_0^\eta (\nabla \psi_{t},\nabla \psi)\operatorname{d}\!t = \displaystyle\int_0^\eta \langle \Delta_p u - \Delta_p v, \psi\rangle_p \operatorname{d}\!t - \displaystyle\int_0^\eta (g\ast \Delta w,\psi) \operatorname{d}\!t. $$ That is $$ -\dfrac{1}{2}\displaystyle\int_0^\eta \dfrac{\operatorname{d}}{\operatorname{d}\!t} |w|_2^2 \operatorname{d}\!t + \dfrac{1}{2}\displaystyle\int_0^\eta \dfrac{\operatorname{d}}{\operatorname{d}\!t} |\nabla \psi|_2^2 \operatorname{d}\!t = \displaystyle\int_0^\eta \langle \Delta_p u - \Delta_p v,\psi \rangle_p \operatorname{d}\!t - \displaystyle\int_0^\eta ((g\ast\Delta w)(t),\psi) \operatorname{d}\!t, $$ which implies
\begin{equation}\label{eq:06-uni} -\dfrac{1}{2}|w(\eta)|_2^2 - \dfrac{1}{2}|\nabla\psi(0)|_2^2 \leq \displaystyle\int_0^\eta \langle \Delta_p u - \Delta_p v,\psi \rangle_p \operatorname{d}\!t - \displaystyle\int_0^\eta ((g\ast\Delta w)(t),\psi) \operatorname{d}\!t \end{equation}
(35)
As before, applying (23), (24) and the generalized Hölder inequality with $$\dfrac{p-2}{4p}+ \dfrac{p-2}{4p} + \dfrac{1}{p}+\dfrac{1}{2}=1$$ we obtain
\begin{eqnarray}\label{eq:07-uni} |\langle \Delta_p u - \Delta_p v, \psi\rangle_p| \leq c|\nabla \psi|_2. \end{eqnarray}
(36)
From (34) and the continuous and dense injection (7) we deduce that \(g\ast\Delta w \in L^{2}(0,T;L^2(\Omega))\). Then
\begin{equation}\label{eq:08-uni} \left|\displaystyle\int_\Omega (g\ast\Delta w)(t)\psi \operatorname{d}\!x \right| \leq \left( \displaystyle\int_\Omega |g\ast\Delta w|^2 \operatorname{d}\!x \right)^{\frac{1}{2}} \left( \displaystyle\int_\Omega |\psi|^2 \operatorname{d}\!x \right)^{\frac{1}{2}} \leq c|\psi|_2. \end{equation}
(37)
From (35), (36), (37), Poincaré and Cauchy-Schwarz inequalities we deduce
\begin{equation}\label{eq:09-uni} \dfrac{1}{2}|w|_2^2 + \dfrac{1}{2}|\nabla \psi(0)|_2^2 \leq c \displaystyle\int_0^\eta |\nabla \psi|^2_2\operatorname{d}\!t. \end{equation}
(38)
Now we introduce \(w_1(x,t) = \displaystyle\int_0^t w(x,\xi)\operatorname{d}\!\xi\). For all \(t\in [0,\eta)\) we have
\begin{equation}\label{eq:10-uniA} \psi(x,t) = -\displaystyle\int_t^\eta w(x,\xi) \operatorname{d}\!\xi = -\displaystyle\int_0^\eta w(x,\xi) \operatorname{d}\!\xi + \displaystyle\int_0^t w(x,\xi) \operatorname{d}\!\xi = w_1(x,t) - w_1(x,\eta), \end{equation}
(39)
and then
\begin{equation}\label{eq:10-uni} \psi(x,0) = w_1(x,0) - w_1(x,\eta) = -w_1(x,\eta). \end{equation}
(40)
From (38), (39) and (40) we obtain $$ \dfrac{1}{2}|\nabla w_1|_2^2 \leq c \displaystyle\int_0^\eta |\nabla w_1|_2^2 \operatorname{d}\!t. $$ By Gronwall's inequality we conclude that \( |\nabla w_1|_2^2 = 0\). By (39) we deduce that \(\nabla \psi = 0\) in \(L^2(\Omega)\) for all \(t\in[0,T]\). Finally, it follows from (38) that \( |w|_2^2 = 0\), and then \(u=v\) in \(L^2(\Omega)\) for all \(t\in[0,T]\).
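For clarity, the version of Gronwall's inequality invoked in the last step is the standard integral form: if \(\varphi\geq 0\) satisfies \(\varphi(\eta)\leq c\displaystyle\int_0^\eta \varphi(t)\operatorname{d}\!t\) for every \(\eta\in[0,T]\), then $$ \varphi(\eta)\leq 0\cdot e^{\,c\eta}=0, \quad \forall~\eta\in[0,T], $$ which, applied to \(\varphi(\eta)=|\nabla w_1(\eta)|_2^2\), gives \(\nabla w_1\equiv 0\).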

6. Asymptotic Behaviour

Theorem 6.1. Let \(u\) be a solution of (2)-(4) with initial data \(u_0 \in V\), \(u_1 \in L^2(\Omega)\). For \(\phi\colon \mathbb{R}_+ \to \mathbb{R}_+\) an increasing \(C^2\) function such that \(\phi(0)=0\) and \(\phi(t) \xrightarrow{t\to+\infty} +\infty\), we have, for \(q>0\),

\begin{equation}\label{eq:ast} E(t) \leq c E(0)(1+\phi(t))^{\frac{-1}{q}}, ~~ \forall~t>0 \end{equation}
(41)
where \(c\) is a positive constant independent of the initial energy \(E(0)\).

Proof. We will use Lemma 3.3, due to P. Martinez [16], based on a new inequality that generalizes a result of Haraux [17]. To this end we start the proof of (41) by multiplying (2) by \(E^q(t)\phi'(t)u\), so that $$ \displaystyle\int_{0}^{T} E^q\phi'\displaystyle\int_\Omega u(u_{tt}-\Delta_p u -\Delta u + g\ast \Delta u)\operatorname{d}\!x \operatorname{d}\!t =0, $$ from which we obtain, by straightforward calculations,

\begin{eqnarray} 2\displaystyle\int_{S}^{T} E^{q+1}\phi'\operatorname{d}\!t &\leq& -\left[E^q\phi' \displaystyle\int_\Omega uu_t\operatorname{d}\!x\right]_S^T \nonumber\\ && + 4\displaystyle\int_{S}^{T}\left[\left(qE'E^{q-1}\phi'+ E^q\phi''\right)\displaystyle\int_\Omega uu_t\operatorname{d}\!x \right]\operatorname{d}\!t \nonumber\\ && + 4\displaystyle\int_{S}^{T}\left[E^{q}\phi' \dfrac{1}{2} \displaystyle\int_\Omega |u_t|^2\operatorname{d}\!x \right]\operatorname{d}\!t + \displaystyle\int_{S}^{T}\left[E^{q}\phi' \dfrac{1}{2} \displaystyle\int_\Omega g\Box\nabla u\operatorname{d}\!x \right]\operatorname{d}\!t \nonumber\\ && + \displaystyle\int_{S}^{T}\left[E^{q}\phi' \dfrac{1}{2} \displaystyle\int_\Omega \displaystyle\int_0^t g(t-s)|\nabla u(s)|^2 \operatorname{d}\!s \operatorname{d}\!x \right]\operatorname{d}\!t \nonumber\\ && + \displaystyle\int_{S}^{T}\left[E^{q}\phi \dfrac{1}{2} \displaystyle\int_\Omega |\nabla u|^2 \operatorname{d}\!x \displaystyle\int_0^T g(t-s)\operatorname{d}\!s \right]\operatorname{d}\!t. \label{eq:3.1} \end{eqnarray}
(42)
In the stability set \(V\) we have
\begin{eqnarray} \dfrac{1}{2}\displaystyle\int_0^t g(t-s)|\nabla u(s)|^2 \operatorname{d}\!s \leq \dfrac{1}{p}\|\nabla u(s) \|_{p}^p. \label{HV} \end{eqnarray}
(43)
Replacing (43) in (42) we obtain
\begin{eqnarray}\label{eq:3.2} \displaystyle\int_S^T E^{q+1} \phi' \operatorname{d}\!t &\leq& -\left[ E^q \phi' \displaystyle\int_\Omega uu_t \operatorname{d}\!x \right]_S^T + \displaystyle\int_0^T \left[ \left(qE'E^{q-1}\phi' + E^q\phi''\right) \displaystyle\int_\Omega uu_t \operatorname{d}\!x \right]\operatorname{d}\!t \nonumber\\ && + \dfrac{3}{2} \displaystyle\int_S^T E^q \phi'\displaystyle\int_\Omega |u_t|^2 \operatorname{d}\!x\operatorname{d}\!t. \end{eqnarray}
(44)
Now, we will estimate each term on the right side of (44). Applying the same argument as in [6] we deduce
\begin{equation}\label{eq:3.3} \left|\left[ E^q\phi'\displaystyle\int_\Omega uu_t \operatorname{d}\!x \right]_S^T\right| \leq c E(s), ~~ \forall~t \geq S \end{equation}
(45)
and
\begin{equation}\label{eq:3.4} \left|\displaystyle\int_S^T \left[\left( qE'E^{q-1}\phi' + E^q\phi''\right) \displaystyle\int_\Omega uu_t \operatorname{d}\!x \right]\operatorname{d}\!t \right| \leq c E(s), ~~ \forall~t \geq S. \end{equation}
(46)
The estimate of \(\displaystyle\int_S^T E^q\phi'\displaystyle\int_\Omega |u_t|^2 \operatorname{d}\!x \operatorname{d}\!t\) is quite delicate. Let \(\sigma\colon\mathbb{R}_+\to\mathbb{R}_+\) be a strictly positive function such that \(\displaystyle\int_0^\infty \sigma(\tau)\operatorname{d}\!\tau =+\infty\). Then \(\phi(t) = \displaystyle\int_0^t \sigma(\tau)\operatorname{d}\!\tau\) satisfies \(\phi(0)=0\) and \(\phi(t) \xrightarrow{t\to+\infty} +\infty\). Consider \(\rho(t,u)\leq -E'(t)\). According to [6], for all \(0< S< T\) and \(l< m+1\), $$\begin{array}{rcl} \displaystyle\int_S^T E^q \phi' \displaystyle\int_\Omega |u_t|^l \operatorname{d}\!x\operatorname{d}\!t &\leq& c\displaystyle\int_S^T E^q \phi' \displaystyle\int_\Omega \dfrac{1}{\sigma(t)}u_t\rho(t,u) \operatorname{d}\!x \operatorname{d}\!t \\ && + c'\displaystyle\int_S^T E^q \phi' \displaystyle\int_\Omega \left( \dfrac{1}{\sigma(t)}u_t \rho(t,u) \right)^{\frac{l}{m+1}} \operatorname{d}\!x \operatorname{d}\!t \\ &\leq& c\displaystyle\int_S^T E^q \dfrac{\phi'}{\sigma(t)}(-E') \displaystyle\int_\Omega |u'|\operatorname{d}\!x \operatorname{d}\!t \\ && + c'\displaystyle\int_S^T E^q \phi' \sigma^{-\frac{l}{m+1}}(t)(-E')^{\frac{l}{m+1}} \displaystyle\int_\Omega \left|u'\right|^{\frac{l}{m+1}} \operatorname{d}\!x \operatorname{d}\!t. \end{array} $$ Applying the Hölder inequality and using \(l< m+1\) and \(|u'|_2^2 < c\), we get $$\begin{array}{rcl} \displaystyle\int_S^T E^q \phi' \displaystyle\int_\Omega |u_t|^l \operatorname{d}\!x\operatorname{d}\!t &\leq& c\displaystyle\int_S^T E^q \dfrac{\phi'}{\sigma(t)}(-E')\operatorname{d}\!t \\ &&+ c' \displaystyle\int_S^T E^q {\phi'}^{\frac{m+1-l}{m+1}}\left(\dfrac{\phi'}{\sigma(t)}\right)^{\frac{l}{m+1}}(-E')^{\frac{l}{m+1}}\operatorname{d}\!t. \end{array} $$ Fix an arbitrarily small \(\varepsilon>0\) (to be chosen later). Applying Young's inequality with \(\dfrac{1}{\frac{m+1}{m+1-l}} + \dfrac{1}{\frac{m+1}{l}}=1\), we obtain $$ \begin{array}{rcl} \displaystyle\int_S^T E^q \phi' \displaystyle\int_\Omega |u'|^l \operatorname{d}\!x\operatorname{d}\!t &\leq& c \displaystyle\int_S^T E^q \dfrac{\phi'}{\sigma(t)} (-E')\operatorname{d}\!t \\ && + c'\dfrac{m+1-l}{m+1}\varepsilon^{\frac{m+1}{m+1-l}}\displaystyle\int_S^T E^{q\frac{m+1}{m+1-l}} \phi'\operatorname{d}\!t \\ && + c'\dfrac{l}{m+1} \displaystyle\int_\Omega (-E')\dfrac{\phi'}{\sigma(t)}\varepsilon^{-\frac{m+1}{l}}\operatorname{d}\!t. \end{array} $$ From this it follows that $$ \begin{array}{rcl} \displaystyle\int_S^T E^q \phi' \displaystyle\int_\Omega |u_t|^l \operatorname{d}\!x\operatorname{d}\!t &\leq& c E^q(s) + c'\dfrac{m+1-l}{m+1}\varepsilon^{\frac{m+1}{m+1-l}}\displaystyle\int_S^T E^{q\frac{m+1}{m+1-l}} \phi'\operatorname{d}\!t \\ && + c'\dfrac{l}{m+1} \varepsilon^{-\frac{m+1}{l}}E(s). \end{array} $$ Taking \(l=2\), \(\rho(t,u) = \dfrac{1}{2} \displaystyle\int_{\Omega}(g\Box\nabla u)(t)\operatorname{d}\!x \) and choosing \(q\) such that $$\dfrac{m+1}{m+1-l}=\dfrac{q+1}{q}$$ we obtain
\begin{equation}\label{eq:3.5} \displaystyle\int_S^T E^q \phi' \displaystyle\int_\Omega |u_t|^2 \operatorname{d}\!x\operatorname{d}\!t \leq cE(s) + c\dfrac{m-1}{m+1}\varepsilon^{\frac{m+1}{m-1}}\displaystyle\int_S^T E^{q+1} \phi'\operatorname{d}\!t + \dfrac{2c}{m+1} \varepsilon^{-\frac{m+1}{2}}E(s) \end{equation}
(47)
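Note that with \(l=2\) the relation \(\frac{m+1}{m+1-l}=\frac{q+1}{q}\) determines the exponent \(q\) explicitly (recall that \(l=2< m+1\), so \(m>1\)): $$ \dfrac{m+1}{m-1}=\dfrac{q+1}{q} \;\Longleftrightarrow\; q(m+1)=(q+1)(m-1) \;\Longleftrightarrow\; q=\dfrac{m-1}{2}>0. $$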
Now we deduce from (44), (45), (46) and (47) $$ \left( 1-c\dfrac{m-1}{m+1}\varepsilon^{\frac{m+1}{m-1}} \right) \displaystyle\int_S^T E^{q+1}\phi' \operatorname{d}\!t \leq c(\varepsilon)E(s). $$ Finally, choosing \(\varepsilon\) small enough we conclude $$ \displaystyle\int_S^T E^{q+1} \phi'\operatorname{d}\!t \leq cE(s), ~~ q>0 $$ and the proof is complete.

Concluding remarks

When \(p=2\) it is well known that equation (2) describes a homogeneous and isotropic viscoelastic solid, and the genuine memory term \(g\ast \Delta u\) induces a damping mechanism, so asymptotic stability is to be expected. For instance, for the nonhomogeneous problem with a time-independent function \(f\) and a source term, the existence of a global attractor was proved in [31]. The problem with supercritical source and damping terms was studied in [32], where, employing the theory of monotone operators and nonlinear semigroups combined with energy methods, the existence of a unique local weak solution in the finite energy space was established. As a follow-up, [33] recently considered supercritical nonlinearities and studied blow-up of solutions when the source is stronger than the dissipation. For the case \(p>2\) the nonlinear equation (2) leads to a problem not previously considered. The highlight here was to prove the existence of solutions and energy decay in the appropriate stability set created from the Nehari manifold.

Acknowledgement

This project was supported by PNPD/UFBA/CAPES (Brazil). The authors are grateful to the referees for their valuable comments and suggestions that helped improve the original manuscript.

Competing Interests

The authors declare that they have no competing interests.

References

  1. Adams, R. A.(1975). Sobolev Spaces , Academic Press. [Google Scholor]
  2. Brezis, H. (2010). Functional analysis, Sobolev spaces and partial differential equations . Springer Science & Business Media. [Google Scholor]
  3. Dreher, M. (2007). The wave equation for the \(p\)-Laplacian. Hokkaido mathematical journal, 36(1), 21-52. [Google Scholor]
  4. Greenberg, J. M., MacCamy, R. C., & Mizel, V. J. (1968). On the Existence, Uniqueness, and Stability of Solutions of the Equation \(\sigma'(u_{x})u_{xx} + \lambda u_{xtx} = \rho_{0} u_{tt}\). Journal of Mathematics and Mechanics, 707-728. [Google Scholor]
  5. Ma, T. F., & Soriano, J. A. (1999). On weak solutions for an evolution equation with exponential nonlinearities. Nonlinear Analysis: Theory, Methods & Applications, 37(8), 1029-1038. [Google Scholor]
  6. Benaissa, A., & Mokeddem, S. (2007). Decay estimates for the wave equation of \(p\)-Laplacian type with dissipation of \(m\)-Laplacian type. Mathematical methods in the applied sciences, 30(2), 237-247. [Google Scholor]
  7. Rammaha, M., Toundykov, D., & Wilstein, Z. (2012). Global existence and decay of energy for a nonlinear wave equation with \(p\)-Laplacian damping. Discrete Contin. Dyn. Syst. , 32(12), 4361-4390. [Google Scholor]
  8. Pei, P., Rammaha, M. A., & Toundykov, D. (2015). Weak solutions and blow-up for wave equations of \(p\)-Laplacian type with supercritical sources. Journal of Mathematical Physics, 56(8), 081503.[Google Scholor]
  9. Ye, Y. (2007). Global existence and asymptotic behavior of solutions for a class of nonlinear degenerate wave equations. International Journal of Differential Equations, 2007, 19685. [Google Scholor]
  10. Biazutti, A. C. (1995). On a nonlinear evolution equation and its applications. Nonlinear analysis: Theory, Methods & Applications, 24(8), 1221-1234. [Google Scholor]
  11. Ang, D. D., & Pham Ngoc Dinh, A. (1988). Strong solutions of a quasilinear wave equation with nonlinear damping. SIAM Journal on Mathematical Analysis, 19(2), 337-347. [Google Scholor]
  12. d'Ancona, P., & Spagnolo, S. (1991). On the life span of the analytic solutions to quasilinear weakly hyperbolic equations. Indiana University Mathematics Journal,40, 71-99. [Google Scholor]
  13. Chueshov, I., & Lasiecka, I. (2006). Existence, uniqueness of weak solutions and global attractors for a class of nonlinear \(2D\) Kirchhoff-Boussinesq models. Discrete and Continuous Dynamical Systems, 15(3), 777-809. [Google Scholor]
  14. Zhijian, Y. (2003). Global existence, asymptotic behavior and blowup of solutions for a class of nonlinear wave equations with dissipative term. Journal of Differential Equations, 187(2), 520-540. [Google Scholor]
  15. Gao, H., & Ma, T. F. (1999). Global solutions for a nonlinear wave equation with the \(p\)-Laplacian operator. Electronic Journal of Qualitative Theory of Differential Equations, 1999(11), 1-13. [Google Scholor]
  16. Martinez, P. (1999). A new method to obtain decay rate estimates for dissipative systems. ESAIM: Control, Optimisation and Calculus of Variations, 4, 419-444.[Google Scholor]
  17. Haraux, A. (1985). Two remarks on dissipative hyperbolic problems. Research Notes in Mathematics, 122, 161-179. [Google Scholor]
  18. Nakao, M. (1978). A difference inequality and its application to nonlinear evolution equations. Journal of the Mathematical Society of Japan, 30(4), 747-762. [Google Scholor]
  19. Ambrosetti, A., & Rabinowitz, P. H. (1973). Dual variational methods in critical point theory and applications. Journal of functional Analysis, 14(4), 349-381. [Google Scholor]
  20. Willem, M. (1997). Minimax theorems(Progress in Nonlinear Differential Equations and Their Applications) (Vol. 24). Springer Science & Business Media. [Google Scholor]
  21. Alves, M. S., Raposo, C. A., Rivera, J. E. M., Sepúlveda, M., & Villagrán, O. V. (2010). Uniform stabilization for the transmission problem of the Timoshenko system with memory. Journal of Mathematical Analysis and Applications, 369(1), 323-345.[Google Scholor]
  22. Coffman, C. V. (1973). Lyusternik-Schnirelman theory and eigenvalue problems for monotone potential operators. Journal of Functional Analysis, 14(3), 237-252. [Google Scholor]
  23. Zeidler, E. (1990). Nonlinear Functional Analysis and its Applications II/B: Nonlinear Monotone Operators. Springer-Verlag, New York. [Google Scholor]
  24. Coddington, E. A., & Levinson, N. (1955). Theory of ordinary differential equations. Tata McGraw-Hill Education.[Google Scholor]
  25. Hale, J. K. (1997). Ordinary Differential Equations. Dover Publications, INC.
  26. Lions, J. L. (1988). Contrôlabilité exacte, perturbations et stabilisation de systèmes distribués. Tome 1. RMA, 8.
  27. Zhijian, Y. (2009). Longtime behavior for a nonlinear wave equation arising in elasto‐plastic flow. Mathematical Methods in the Applied Sciences, 32(9), 1082-1104.[Google Scholor]
  28. Domokos, A., & Manfredi, J. J. (2009). A second order differentiability technique of Bojarski-Iwaniec in the Heisenberg group. Functiones et Approximatio Commentarii Mathematici, 40(1), 69-74. [Google Scholor]
  29. Raposo, C. A., Ribeiro, J. O., & Cattai, A. P. (2018). Global solution for a thermoelastic system with \(p\)-Laplacian. Applied Mathematics Letters, 86, 119-125.
  30. Vishik, M. I., & Ladyzhenskaya, O. A. (1956). Boundary value problems for partial differential equations and certain classes of operator equations. Uspekhi matematicheskikh nauk, 11(6), 41-97. [Google Scholor]
  31. Giorgi, C., Rivera, J. E. M., & Pata, V. (2001). Global attractors for a semilinear hyperbolic equation in viscoelasticity. Journal of Mathematical Analysis and Applications, 260(1), 83-99.[Google Scholor]
  32. Guo, Y., Rammaha, M. A., Sakuntasathien, S., Titi, E. S., & Toundykov, D. (2014). Hadamard well-posedness for a hyperbolic equation of viscoelasticity with supercritical sources and damping. Journal of Differential Equations, 257(10), 3778-3812. [Google Scholor]
  33. Guo, Y., Rammaha, M. A., & Sakuntasathien, S. (2017). Blow-up of a hyperbolic equation of viscoelasticity with supercritical nonlinearities. Journal of Differential Equations, 262(3), 1956-1979. [Google Scholor]
]]>
Identification of a diffusion coefficient in degenerate/singular parabolic equations from final observation by hybrid method https://old.pisrt.org/psr-press/journals/oma-vol-2-issue-2-2018/identification-of-a-diffusion-coefficient-in-degenerate-singular-parabolic-equations-from-final-observation-by-hybrid-method/ Fri, 14 Dec 2018 18:26:15 +0000 https://old.pisrt.org/?p=1655
OMA-Vol. 2 (2018), Issue 2, pp. 142–155 | Open Access Full-Text PDF
Khalid Atifi, El-Hassan Essoufi, Hamed Ould Sidi
Abstract: This paper deals with the determination of a coefficient in the diffusion term of a degenerate/singular one-dimensional linear parabolic equation from final data observations. The mathematical model leads to a non-convex minimization problem. To solve it, we propose a new approach based on a hybrid genetic algorithm (coupling a genetic algorithm with a gradient-type descent method). Firstly, with the aim of showing that the minimization problem and the direct problem are well posed, we prove that the solution's behavior changes continuously with respect to the initial conditions. Secondly, we show that the minimization problem has at least one minimum. Finally, the gradient of the cost function is computed using the adjoint state method. We also present some numerical experiments to show the performance of this approach.
]]>
Open Access Full-Text PDF

Open Journal of Mathematical Analysis

Identification of a diffusion coefficient in degenerate/singular parabolic equations from final observation by hybrid method

Khalid Atifi\(^{1}\), El-Hassan Essoufi, Hamed Ould Sidi
Université Hassan Premier Faculté des sciences et techniques Département de Mathématiques et Informatique Laboratoire MISI Settat, Maroc.; (K.A & E.E & E.O.S)

\(^{1}\)Corresponding Author;  k.atifi.uhp@gmail.com

Copyright © 2018 Khalid Atifi, El-Hassan Essoufi, Hamed Ould Sidi. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.

Abstract

This paper deals with the determination of a coefficient in the diffusion term of a degenerate/singular one-dimensional linear parabolic equation from final data observations. The mathematical model leads to a non-convex minimization problem. To solve it, we propose a new approach based on a hybrid genetic algorithm (coupling a genetic algorithm with a gradient-type descent method). Firstly, with the aim of showing that the minimization problem and the direct problem are well posed, we prove that the solution's behavior changes continuously with respect to the initial conditions. Secondly, we show that the minimization problem has at least one minimum. Finally, the gradient of the cost function is computed using the adjoint state method. We also present some numerical experiments to show the performance of this approach.

Keywords:

Data assimilation, Regularization, Hybrid method, Heat equation, Inverse problem, Optimization.

1. Introduction

This article is devoted to the identification of a diffusion coefficient in degenerate/singular parabolic equation by the variational method, from final data observation. The problem we treat can be stated as follows:
Consider the following degenerate parabolic equation with singular potential
\begin{equation} \left\{\begin{array}{l} \partial _{t}u +\mathcal{A}(u)=f\\ u(0,t)=u(1,t)=0 \ \forall t\in (0,T) \\ u (x,0)=u_{0}(x)\ \forall x\in \Omega \\ \end{array}% \right. \end{equation}
(1)
where \(\Omega=(0,1)\), \(f\in L^{2}( \Omega \times (0,T))\) and \(\mathcal{A}\) is the operator defined as \begin{equation*} \mathcal{A}(u)=-\partial_{x}(a(x) \partial _{x}u(x)) -\dfrac{\lambda }{x^{\beta }}u,\ a(x)=k(x)x^\alpha \end{equation*} with \(\alpha\in(0,1)\), \(\beta\in (0,2-\alpha)\), \(\lambda\leq 0\) and \(0< k(x)\leq c_1\). The formulation of the inverse problem is
\begin{equation} \left\{\begin{array}{l} \text{find } k \in A_{ad} \text{ such that} \\ J(k)=\underset{\kappa \in A_{ad}}{\min } J(\kappa), \end{array}% \right. \end{equation}
(2)
where the cost function \(J\) is defined as follows
\begin{equation} J(k)=\dfrac{1}{2}\left\Vert u (t=T)-u _{obs}\right\Vert _{L^{2}(\Omega)}^{2}, \end{equation}
(3)
subject to \(u\) is the weak solution of the parabolic problem (1). The space \(A_{ad}\) is the admissible set of diffusion coefficients.
The functional \(J\) is non-convex, so any descent algorithm may stop at the first local minimum it encounters. To stabilize this problem, the classical method is to add to \(J\) a regularizing term coming from Tikhonov regularization. We thus obtain the functional
\begin{equation} J_T(\kappa)=\dfrac{1}{2}\left\Vert u (t=T)-u _{obs}\right\Vert _{L^{2}(\Omega)}^{2}+\frac{\varepsilon}{2}\left\Vert \kappa-k ^{b}\right\Vert _{L^{2}(\Omega)}^{2}. \end{equation}
(4)
But in reality \(k ^{b}\) is only partially known, so the determination of the parameter \(\varepsilon\) presents an important difficulty. At the time of writing, there is no fully effective method for determining this parameter. In the literature we found two popular methods: cross-validation and the L-curve (see [1, 2, 3, 4]). For both methods it is necessary to solve the problem with several different values of the regularization parameter.
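For concreteness, here is a minimal sketch (in Python, with NumPy) of how the Tikhonov functional (4) could be evaluated for a candidate coefficient; the array names, the uniform grid and the rectangle-rule approximation of the \(L^2\) norms are illustrative assumptions of this sketch, and the forward solve producing the final state is taken as given.

import numpy as np

def tikhonov_cost(u_T, u_obs, kappa, k_b, eps, h):
    """J_T(kappa) = 0.5*||u(T) - u_obs||^2 + (eps/2)*||kappa - k_b||^2.

    u_T, u_obs, kappa, k_b : values on a uniform grid of step h;
    the L^2(Omega) norms are approximated by the rectangle rule.
    """
    misfit = 0.5 * h * np.sum((u_T - u_obs) ** 2)
    penalty = 0.5 * eps * h * np.sum((kappa - k_b) ** 2)
    return misfit + penalty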
To show the difficulty of determining the parameter \(\varepsilon\) when we have only partial knowledge of \(k ^{b}\) (for example at 20% of the spatial grid points), we performed several tests for different values of \(\varepsilon\); the results are as follows.
\(\varepsilon\) Minimum value of \(J\)
\(1\) \(2.230238\times 10^{-2}\)
\(10^{-1}\) \(1.277236\times 10^{-2}\)
\(10^{-2}\) \(1.093206\times 10^{-2}\)
\(10^{-3}\) \(2.093763\times 10^{-2}\)
\(10^{-4}\) \(3.029143\times 10^{-2}\)
\(10^{-5}\) \(2.92163\times 10^{-3}\)
\(10^{-6}\) \(1.12187\times 10^{-2}\)
Table 1. Results on the Tikhonov approach. Comparison between different values of regularizing coefficient \(\varepsilon\).

Figure 1. Graph of \(k\) in case \(\varepsilon=10^{-5}\)

In Table 1, the smallest value of \(J\) is reached when \(\varepsilon=10^{-5}\), and Figure 1 shows that we cannot rebuild the coefficient \(k\) in the case \(\varepsilon=10^{-5}\). We conclude that choosing \(\varepsilon\) by running several tests with different values is not effective.

To overcome this problem, in the case where \(k ^{b}\) is only partially known, we propose in this work a new approach based on a hybrid genetic algorithm, which consists in minimizing the functional \(J\) without any regularization. This work is a continuation of [5], where the authors identify the initial state of a degenerate parabolic problem.

Firstly, with the aim of showing that the minimization problem and the direct problem are well-posed, we prove that the solution's behavior changes continuously with respect to the initial conditions. Secondly, we show that the minimization problem has at least one minimum. Finally, the gradient of the cost function is computed using the adjoint state method. We also present some numerical experiments to show the performance of this approach.

2. Well-posedness

Now we specify some notations we shall use. Let us introduce the following functional spaces (see [6, 7, 8])
\begin{equation} V=\left\{ u\in L^{2}(\Omega),u\text{ absolutely continuous on }\left[ 0,1\right] \right\}, \end{equation}
(5)
\begin{equation} S=\left\{ u\in L^{2}(\Omega),\sqrt{a}u_{x}\in L^{2}(\Omega)\text{ and }u(0)=u(1)=0 \right\} , \end{equation}
(6)
\begin{equation} H_{a}^{1}(\Omega)=V \cap S, \end{equation}
(7)
\begin{equation} H_{a}^{2}(\Omega)=\left\{ u\in H_{a}^{1}(\Omega),\text{ }au_{x}\in H^{1}(\Omega)\right\}, \end{equation}
(8)
\begin{equation*} H^1_{\alpha,0}= \lbrace u\in H^1_{\alpha}\mid u(0)=u(1)=0 \rbrace, \end{equation*} \begin{equation*} H^1_{\alpha}= \lbrace {u\in L^2(\Omega)\cap H^1_{Loc}(]0,1])\mid x^{\frac{\alpha}{2}}u_x\in L^2(\Omega)} \rbrace. \end{equation*} With \begin{equation*} \parallel u\parallel ^2_{H^1_a(\Omega)}=\parallel u\parallel ^2_{L^2(\Omega)}+\parallel \sqrt{a} u_x\parallel ^2_{L^2(\Omega)}, \end{equation*} \begin{equation*} \parallel u\parallel^2_{H^2_a(\Omega)}=\parallel u\parallel ^2_{H^1_a(\Omega)}+\parallel (au_x)_x\parallel ^2_{L^2(\Omega)}, \end{equation*} \begin{equation*} < u,v >_{H^1_{\alpha}}=\int_{\Omega} (uv+ k(x)x^{\alpha}u_xv_x)\text{\ }dx. \end{equation*} We recall (see [8]) that \(H^1_a\) is a Hilbert space and that it is the closure of \(C^\infty_c(0,1)\) for the norm \(\parallel . \parallel_{H^1_a}.\) If \(\frac{1}{\sqrt{a}}\in L^1(\Omega)\) then the following injections \begin{equation*} H^1_a(\Omega)\hookrightarrow L^2(\Omega), \end{equation*} \begin{equation*} H^2_a(\Omega)\hookrightarrow H^1_a(\Omega), \end{equation*} \begin{equation*} H^1(0,T;L^2(\Omega))\cap L^2(0,T;D(A)) \hookrightarrow L^2(0,T;H^1_a)\cap C(0,T;L^2(\Omega)) \end{equation*} are compact. Firstly, we prove that problem (1) is well-posed and that the functional \(J\) is continuous and Gâteaux-differentiable in \(A_{ad}\).
The weak formulation of the problem (1) is :
\begin{equation} \int_{\Omega}\partial_t u v \ dx+\int_{\Omega} \left(a(x)\partial_x u \partial_x v-\dfrac{\lambda }{x^{\beta }}u v\right) \ dx=\int_{\Omega}fv \ dx, \ \forall v \in H_0^1(\Omega). \end{equation}
(9)
Let the bilinear form
\begin{equation} \mathcal{B}[u,v]=\int_{\Omega} \left(a(x)\partial_x u \partial_x v-\dfrac{\lambda }{x^{\beta }}u v\right) \ dx. \end{equation}
(10)
Since \(a(x)=0\) at \(x=0\) and the potential \(\dfrac{\lambda }{x^{\beta }}\) is singular at \(x=0\), the bilinear form \(\mathcal{B}\) is noncoercive and non-continuous at \(x=0\). Consider the unbounded operator \((\mathcal{O},D(\mathcal{O}))\) where
\begin{equation} \mathcal{O}g=(k(x)x^{\alpha}g_x)_x+\frac{\lambda}{x^{\beta}}g, \ \forall g \in D(\mathcal{O}) \end{equation}
(11)
and \begin{equation*} D(\mathcal{O})=\lbrace {g\in H^1_{\alpha,0}\cap H^2_{Loc}(]0,1])\mid (x^{\alpha}g_x)_x+\frac{\lambda}{x^{\beta}}g\in L^2(\Omega)}\rbrace . \end{equation*} Let
\begin{equation} \displaystyle A_{ad} =\lbrace g\in H^1(\Omega); \Vert g\Vert_{H^1(\Omega)}\leq r \rbrace, \text{ where $r$ is a real strictly positive constant.} \end{equation}
(12)
We recall the following theorem

Theorem 2.1. (see [7, 10]) If \(f=0\), then for all \(u_0\in L^2(\Omega)\) the problem (1) has a unique weak solution

\begin{equation} u\in C([0,T];L^2(\Omega))\cap C(]0,T];D(\mathcal{O}))\cap C^1(]0,T];L^2(\Omega)) \end{equation}
(13)
if moreover \(u_{0}\in D(\mathcal{O})\), then
\begin{equation} u\in C([0,T];D(\mathcal{O}))\cap C^1([0,T];L^2(\Omega)) \end{equation}
(14)
if \(f\in L^{2}(]0,1[ \times (0,T) )\) then for all \(u_{0}\in L^2(\Omega)\), the problem (1) has a unique solution
\begin{equation} u\in C([0,T];L^2(\Omega)).\square \end{equation}
(15)
We recall the following theorem

Theorem 2.2. (see [1]) For every \(u_0\in L^2(\Omega)\) and \(f\in L^2(Q_T),\) where \(Q_T=((0,T)\times\Omega)\) there exists a unique solution of problem (1). In particular, the operator \(\mathcal{O}: D(\mathcal{O})\longmapsto L^2(\Omega)\) is non positive and self-adjoint in \(L^2(\Omega)\) and it generates an analytic contraction semigroup of angle \(\pi/2.\) Moreover, let \(u_0\in D(\mathcal{O})\); then
\(f\in W^{1,1}(0,T;L^2(\Omega))\Rightarrow u\in C^0(0,T;D(\mathcal{O}))\cap C^1([0,T];L^2(\Omega)),\)
\(f\in L^{2}(0,T;L^2(\Omega))\Rightarrow u\in H^1(0,T;L^2(\Omega)).\)

Theorem 2.3. Let \(u\) the weak solution of (1), the function

\begin{equation} \begin{array}{l} \varphi : H^1(\Omega)\longrightarrow C([0,T];L^2(\Omega)) \\ \ k\longmapsto u \end{array} \end{equation}
(16)
is continuous, and the functional \(J\) is continuous in \(A_{ad}\). Therefore, problem (2) has at least one solution in \(A_{ad}\).

Proof. Let \(\delta k \in H^1(\Omega)\) be a small variation such that \(k+\delta k \in A_{ad}\) and \(u_0\in D(\mathcal{O}).\) Consider \(\delta u =u ^{\delta }-u\), where \(u\) is the weak solution of (1) with diffusion coefficient \(k\), and \(u ^{\delta }\) is the weak solution of the following problem (17) with diffusion coefficient \(k^{\delta }=k+\delta k.\)

\begin{equation} \left\{\begin{array}{l} \partial _{t}u^{\delta} - \partial_{x}((k+\delta k)x^\alpha\partial _{x}u^{\delta}) -\dfrac{\lambda }{x^{\beta }}u^{\delta}=f(x,t)\in \Omega\times(0,T) \\ u^{\delta}\mid_{x=0}=u^{\delta}\mid_{x=1}=0,\\ u^{\delta}(x,0)=u_0(x) \end{array}% \right. \end{equation}
(17)
Subtracting (1) from (17) gives
\begin{equation}\label{ProbDpsicritical} \left\{ \begin{array}{l} \displaystyle \partial_t (\delta u)-\left((k+\delta k)x^{\alpha}\partial_x \delta u \right)_{x}-\left(\delta k x^{\alpha}\partial_x u \right)_{x}-\dfrac{\lambda }{x^{\beta }}\delta u =0 \\ \delta u(0,t)=\delta u(1,t)=0 \ \forall t\in (0,T) \\ \delta u (x,0)=0 \ \forall x\in \Omega. \end{array} \right. \end{equation}
(18)
The weak formulation for (18) is
\begin{align} \int_{\Omega}\partial_t (\delta u)vdx+\int_{\Omega}\left((k+\delta k)x^{\alpha}\partial_x (\delta u)\partial_x (v)-\dfrac{\lambda }{x^{\beta }}(\delta u)v \right)dx \nonumber \\ -\int_{\Omega}\left(\delta k x^{\alpha}\partial_x u \right)_{x}vdx=0, \forall v \in H_0^1(\Omega). \end{align}
(19)
Take \(v=\delta u\), then
\begin{align} \int_{\Omega}\partial_t (\delta u)\delta udx+\int_{\Omega}\left((k+\delta k)x^{\alpha}(\partial_x \delta u)^2-\dfrac{\lambda }{x^{\beta }}(\delta u)^2 \right)dx \nonumber \\ -\int_{\Omega}\left(\delta k x^{\alpha}\partial_x u \right)_{x}\delta udx=0. \end{align}
(20)
We have $$\int_{\Omega}\left((k+\delta k)x^{\alpha}(\partial_x \delta u)^2-\dfrac{\lambda }{x^{\beta }}(\delta u)^2 \right)dx\geq 0, $$ this implies that
\begin{equation} \int_{\Omega}\partial_t (\delta u)\delta udx-\int_{\Omega}\left(\delta k x^{\alpha}\partial_x u \right)_{x}\delta udx\leq 0 \end{equation}
(21)
and consequently
\begin{equation} \left |\int_{\Omega}\partial_t (\delta u)\delta udx \right |\leq \int_{\Omega}\left |\left(\delta k x^{\alpha}\partial_x u \right)_{x}\delta u \right |dx, \end{equation}
(22)
then
\begin{equation} \int_{\Omega}\partial_t (\delta u)\delta udx\leq \Vert \delta k\Vert_{L^\infty(\Omega)}\int_{\Omega}\left |\partial_x u \partial_x\delta u\right |dx. \end{equation}
(23)
By integrating between \(0\) and \(t\) with \(t\in [0,T]\) we obtain
\begin{equation} \frac{1}{2}\Vert \delta u(t)\Vert_{L^2(\Omega)}^2\leq \left \| \delta k \right \|_{L^{\infty}(\Omega)}\int_{0}^{T}\int_{\Omega}\left | \partial_x u \partial_x\delta u\right |dxdt, \end{equation}
(24)
since \(u,\delta u\in H^1(0,T;L^2(\Omega)),\) we have \(\displaystyle\int_{0}^{T}\int_{\Omega} \left | \partial_x u \partial_x\delta u\right |dxdt< \infty\),
there is \(C>0,\) such that
\begin{equation} \sup_{t\in [0,T]}\Vert \delta u(t)\Vert_{L^2(\Omega)}^2\leq 2C\Vert \delta k\Vert_{L^{\infty}(\Omega)}, \end{equation}
(25)
which gives
\begin{equation} \Vert \delta u(t)\Vert_{C([0,T];L^2(\Omega))}^2\leq 2C\Vert \delta k\Vert_{H^1(\Omega)}. \end{equation}
(26)
Hence, the functional \(J\) is continuous in
\begin{equation} A_{ad} =\lbrace u\in H^1(\Omega); \Vert u\Vert_{H^1(\Omega)}\leq r \rbrace. \end{equation}
(27)
We have \(H^{1} (\Omega ) \underset{compact}{\hookrightarrow}L^{2}(\Omega ).\) Since the set \(A_{ad}\) is bounded in \(H^1(\Omega)\), \(A_{ad}\) is compact in \(L^{2}(\Omega ).\) Therefore, \(J\) has at least one minimum in \(A_{ad}\).

Now we compute the gradient of \(J\) with the adjoint state method.

3. Gradient of \(J\)

We define the Gâteaux derivative of \(u\) at \(k\) in the direction \(h\in L^2(\Omega)\), by
\begin{equation} \hat{u}=\lim_{s \to 0} \frac{u(k+s h)-u(k) }{s }, \end{equation}
(28)
\(u(k+s h)\) is the weak solution of (1) with diffusion coefficient \(k+s h\), and \(u(k)\) is the weak solution of (1) with diffusion coefficient \(k\). The Gâteaux (directional) derivative of (1) at \(k\) in some direction \(h\in L^2(\Omega)\) gives \begin{equation} \left\{ \begin{array}{l} \displaystyle \frac{\partial \hat{u}}{\partial t}-\frac{\partial}{\partial x}\left ( kx^{\alpha}\frac{\partial \hat{u}}{\partial x}\right)-\frac{\partial}{\partial x}\left (hx^{\alpha}\frac{\partial u}{\partial x}\right)-\frac{\lambda}{x^{\beta}}\hat{u}=0 \\ \displaystyle \hat{u}(x=0,t)=\hat{u}(x=1,t)=0,\\ \displaystyle \hat{u}(x,t=0)=0. \end{array} \right. \end{equation} We introduce the adjoint variable \(P\), and we integrate: $$ \int_{0}^{T}\int_{0}^{1}\left(\frac{\partial \hat{u}}{\partial t}p-\frac{\partial}{\partial x}\left ( kx^{\alpha}\frac{\partial \hat{u}}{\partial x}\right)p-\frac{\partial}{\partial x}\left (hx^{\alpha}\frac{\partial u}{\partial x}\right)p\right)dxdt=0. $$ We calculate each term separately:
\begin{align*} \int_{0}^{T}\int_{0}^{1}\frac{\partial \hat{u}}{\partial t}p =\int_{0}^{1}\left [ \hat{u}p \right ]_{0}^{T}dx-\int_{0}^{1}\int_{0}^{T}\frac{\partial p}{\partial t}\hat{u}dtdx, \end{align*} \begin{align*} \int_{0}^{T}\int_{0}^{1}\frac{\partial }{\partial x}\left (kx^{\alpha}\frac{\partial \hat{u}}{\partial x}\right)& p dxdt =\int_{0}^{T}\left [ \left (kx^{\alpha}\frac{\partial \hat{u}}{\partial x}\right )p \right ]_{0}^{1}dt-\int_{0}^{T}\int_{0}^{1}kx^{\alpha}\frac{\partial \hat{u}}{\partial x}\frac{\partial p}{\partial x} dxdt\\ & = \int_{0}^{T}\left [ \left (kx^{\alpha}\frac{\partial \hat{u}}{\partial x}\right )p \right ]_{0}^{1}dt\\ &- \int_{0}^{T}\left [kx^{\alpha}\hat{u}\frac{\partial p}{\partial x}\right ]_{0}^{1}dt+\int_{0}^{T}\int_{0}^{1}\frac{\partial}{\partial x}\left (kx^{\alpha}\frac{\partial p}{\partial x}\right )\hat{u}dxdt\\ & = \int_{0}^{T}\left [ \left (kx^{\alpha}\frac{\partial \hat{u}}{\partial x}\right )p \right ]_{0}^{1}dt+\int_{0}^{T}\int_{0}^{1}\frac{\partial}{\partial x}\left (kx^{\alpha}\frac{\partial p}{\partial x}\right )\hat{u}dxdt.\\ \end{align*} \begin{align*} \int_{0}^{T}\int_{0}^{1}p\left (\frac{\partial}{\partial x}\left (hx^{\alpha}\frac{\partial u}{\partial x}\right ) \right )dxdt & =\int_{0}^{T}\left [hx^{\alpha} \frac{\partial u}{\partial x}p \right ]_{0}^{1}dt-\int_{0}^{T}\int_{0}^{1} h\frac{\partial u}{\partial x}\frac{\partial p}{\partial x}dxdt. \end{align*} We pose \(\quad p(x=1,t)=0,\quad p(x=0,t)=0, \quad p(x,t=T)=0,\) we obtain
\begin{equation}\label{EqJEquation00} \begin{gathered} \displaystyle \int_0^{T}\langle \hat{u},\partial _{t}P-AP\rangle_{L^2(\Omega)} dt =\langle h ,\int_{0}^{T}x^{\alpha} \frac{\partial p}{\partial x}\frac{\partial u}{\partial x}dt\rangle_{L^2(\Omega)} \\ P(x=0)=P(x=1)=0,\quad P(T)=0. \end{gathered} \end{equation}
(29)
The discretization in time of (29), using the rectangle rule, gives
\begin{equation}\label{EqJEquation} \begin{gathered} \sum_{j=0}^{M+1}\langle \hat{u}(t_j),\partial _{t}P(t_j) -AP(t_j)\rangle_{L^2(\Omega)} \Delta t=\langle h ,\int_{0}^{T}x^{\alpha} \frac{\partial p}{\partial x}\frac{\partial u}{\partial x}dt\rangle_{L^2(\Omega)}\\ P(x=0)=P(x=1)=0 ,\quad P(T)=0. \end{gathered} \end{equation}
(30)
With \begin{equation*} t_j=j\Delta t, \quad j\in \{0,1,2,\dots,M+1\}, \end{equation*} where \(\Delta t\) is the step in time and \(T=(M+1)\Delta t\). The Gateaux derivative of \(J\) at \(k\) in the direction \(h\in L^2(\Omega)\) is given by \begin{equation*} \hat{J}(h)=\lim_{s \to 0} \frac{J(k+s h)-J(k)}{s }. \end{equation*} After some computations, we arrive at
\begin{equation}\label{EqJ3} \hat{J}(h)=\langle u(T)-u_{obs},\hat{u}(T)\rangle _{L^2(\Omega)}. \end{equation}
(31)
The adjoint model is
\begin{equation}\label{ProbAdjointPremProb} \begin{gathered} \partial _{t}P(T)-AP(T)=\frac{1}{\Delta t}(u(T)-u_{obs}), \quad \partial _{t}P(t_j)-AP(t_j)=0 \quad \forall t_j\neq T\\ P(x=0)=P(x=1)=0\quad \forall t_j\in ] 0;T[ \\ P(T)=0. \end{gathered} \end{equation}
(32)
From equations (30), (31) and (32), the gradient of \(J\) is given by
\begin{equation} \frac{\partial J}{\partial k}=\int_{0}^{T}x^{\alpha} \frac{\partial p}{\partial x}\frac{\partial u}{\partial x}dt. \end{equation}
(33)
Problem (32) is backward in time; we solve it after the change of variable \(t\longleftrightarrow T-t\).
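As an illustration, here is a minimal sketch (Python/NumPy) of how the gradient (33) could be approximated once the discrete direct solution \(u\) and adjoint solution \(p\) are available on a common space-time grid; the array layout, the centred finite differences and the rectangle rule in time are assumptions of this sketch, not prescriptions of the paper.

import numpy as np

def gradient_J(u, p, x, dt, alpha):
    """Approximate dJ/dk(x) = int_0^T x^alpha p_x u_x dt, formula (33).

    u, p : arrays of shape (n_time, n_space), u[j, i] ~ u(x_i, t_j);
    x    : spatial nodes (uniform grid); dt : time step; alpha : exponent.
    """
    dx = x[1] - x[0]
    u_x = np.gradient(u, dx, axis=1)      # finite-difference approximation of u_x
    p_x = np.gradient(p, dx, axis=1)      # finite-difference approximation of p_x
    integrand = (x ** alpha) * p_x * u_x  # pointwise x^alpha p_x u_x
    return dt * integrand.sum(axis=0)     # rectangle rule in time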

4. Numerical scheme

Step 1. Full discretization
Discrete approximations of these problems need to be made for numerical implementation. To solve the direct problem and the adjoint problem, we use the \(\theta\)-scheme in time. This method is unconditionally stable for \(1 >\theta \geq \dfrac{1}{2}.\) Let \(h\) be the step in space and \(\Delta t\) the step in time. Let \begin{equation*} x_{i}=ih, \ \ \ \ i\in \left\{ 0,1,2...N+1\right\}, \end{equation*} \begin{equation*} c(x_{i})=a(x_{i}), \end{equation*} \begin{equation*} t_{j}=j\Delta t, \ \ \ j\in \left\{0,1,2...M+1\right\}, \end{equation*} \begin{equation*} f_{i}^{j}=f(x_{i},t_{j}), \end{equation*} we put

\begin{equation} u_{i}^{j}=u(x_{i},t_{j}). \end{equation}
(34)
Let
\begin{equation} da(x_{i})=\dfrac{c(x_{i+1})-c(x_{i})}{h}, \end{equation}
(35)
and
\begin{equation} b(x)=-\dfrac{\lambda }{x^{\beta }}. \end{equation}
(36)
Therefore
\begin{equation} \partial _{t}u +\mathcal{A}u =f \end{equation}
(37)
is approximated by \begin{align*} -\dfrac{\theta \Delta t}{h^{2}}c(x_{i})u_{i-1}^{j+1}+\left( 1+\dfrac{2\theta \Delta t}{h^{2}}c(x_{i})+da(x_{i})\dfrac{\theta \Delta t}{h}+b(x_{i})\theta\Delta t\right) u_{i}^{j+1}\\-(\dfrac{\theta \Delta t}{h^{2}}c(x_{i})+da(x_{i})\dfrac{\theta \Delta t}{h})u_{i+1}^{j+1} \end{align*} \begin{align*} =\dfrac{\left( 1-\theta \right) \Delta t}{h^{2}}c(x_{i})u_{i-1}^{j}\\ +\left( 1-\dfrac{\left( 1-\theta \right) \Delta t}{h}da(x_{i})-\dfrac{2\left( 1-\theta \right) \Delta t}{h^{2}}c(x_{i})-\left( 1-\theta \right)b(x_{i})\Delta t\right) u_{i}^{j} \end{align*}
\begin{equation} +\left( \dfrac{\left( 1-\theta \right) \Delta t}{h}da(x_{i})+\dfrac{\left( 1-\theta \right) \Delta t}{h^{2}}c(x_{i})\right)u_{i+1}^{j}+\Delta t.[\left( 1-\theta \right) f_{i}^{j}+\theta f_{i}^{j+1}]. \end{equation}
(38)
Let us define
\begin{equation} g_{1}(x_{i})=-\dfrac{\theta \Delta t}{h^{2}}c(x_{i}), \end{equation}
(39)
\begin{equation} g_{2}(x_{i})=1+\dfrac{2\theta \Delta t}{h^{2}}c(x_{i})+da(x_{i})\dfrac{\theta \Delta t}{h}+b(x_{i})\theta \Delta t, \end{equation}
(40)
\begin{equation} g_{3}(x_{i})=-\dfrac{\theta \Delta t}{h^{2}}c(x_{i})-da(x_{i})\dfrac{\theta \Delta t}{h}, \end{equation}
(41)
\begin{equation} k_{1}(x_{i})=\dfrac{\left( 1-\theta \right) \Delta t}{h^{2}}c(x_{i}), \end{equation}
(42)
\begin{equation} k_{2}(x_{i})=1-\dfrac{\left( 1-\theta \right) \Delta t}{h}da(x_{i})-\dfrac{2\left(1-\theta \right) \Delta t}{h^{2}}c(x_{i})-\left( 1-\theta \right)b(x_{i})\Delta t, \end{equation}
(43)
\begin{equation} k_{3}(x_{i})=\dfrac{\left( 1-\theta \right) \Delta t}{h}da(x_{i})+\dfrac{\left(1-\theta \right) \Delta t}{h^{2}}c(x_{i}). \end{equation}
(44)
Let \(u^{j}=\left(u_{i}^{j}\right) _{i\in \left\{ 1,2,..N\right\} },\) finally we get
\begin{equation}\label{SystemDiscri} \left\{ \begin{array}{l} \mathcal{D}u^{j+1}=\mathcal{B}u^{j}+\mathcal{V}^{j}\ \text{ where} \ j\in \left\{ 1,2,..M\right\} \\ u^{0}=\left(u_{0}(ih)\right) _{i\in \left\{ 1,2,..N\right\} },% \end{array}% \right. \end{equation}
(45)
where \begin{equation*} \mathcal{D}=% \begin{bmatrix} g_{2}(x_{1}) & g_{3}(x_{1}) & 0 & & & & & 0 \\ g_{1}(x_{2}) & g_{2}(x_{2}) & g_{3}(x_{2}) & 0 & & & & \\ 0 & g_{1}(x_{3}) & g_{2}(x_{3}) & g_{3}(x_{3}) & 0 & & & \\ & 0 & g_{1}(x_{4}) & g_{2}(x_{4}) & g_{3}(x_{4}) & 0 & & \\ & & 0 & . & . & . & 0 & \\ & & & . & . & . & . & 0 \\ & & & & 0 & g_{1}(x_{N-1}) & g_{2}(x_{N-1}) & g_{3}(x_{N-1}) \\ 0 & & & & & 0 & g_{1}(x_{N}) & g_{2}(x_{N})% \end{bmatrix}% \end{equation*} \begin{equation*} \mathcal{B}=% \begin{bmatrix} k_{2}(x_{1}) & k_{3}(x_{1}) & 0 & & & & & 0 \\ k_{1}(x_{2}) & k_{2}(x_{2}) & k_{3}(x_{2}) & 0 & & & & \\ 0 & k_{1}(x_{3}) & k_{2}(x_{3}) & k_{3}(x_{3}) & 0 & & & \\ & 0 & k_{1}(x_{4}) & k_{2}(x_{4}) & k_{3}(x_{4}) & 0 & & \\ & & 0 & . & . & . & 0 & \\ & & & . & . & . & . & 0 \\ & & & & 0 & k_{1}(x_{N-1}) & k_{2}(x_{N-1}) & k_{3}(x_{N-1}) \\ 0 & & & & & 0 & k_{1}(x_{N}) & k_{2}(x_{N})% \end{bmatrix}% \end{equation*} \begin{equation*} \mathcal{V}^{j}=% \begin{bmatrix} \Delta t.[\left( 1-\theta \right) f(x_{1},t_{j})+\theta f(x_{1},t_{j}+\Delta t)]\\ \Delta t.[\left( 1-\theta \right) f(x_{2},t_{j})+\theta f(x_{2},t_{j}+\Delta t)] \\ . \\ . \\ . \\ . \\ \Delta t.[\left( 1-\theta \right) f(x_{N-1},t_{j})+\theta f(x_{N-1},t_{j}+\Delta t)] \\ \Delta t.[\left( 1-\theta \right) f(x_{N},t_{j})+\theta f(x_{N},t_{j}+\Delta t)]% \end{bmatrix}% \end{equation*}
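To make the scheme concrete, here is a minimal sketch (Python/NumPy) of how the tridiagonal matrices \(\mathcal{D}\) and \(\mathcal{B}\) of (45) could be assembled from the coefficients (39)-(44) and one time step carried out; the function names, the forward difference used for \(da\) at the last node and the dense linear solver are illustrative simplifications, not part of the paper.

import numpy as np

def assemble_theta_matrices(x, k, alpha, lam, beta, dt, h, theta):
    """Build D and B of the theta-scheme (45) on the interior nodes x_1,...,x_N."""
    c = k * x ** alpha                      # c(x_i) = a(x_i) = k(x_i) x_i^alpha
    da = np.empty_like(c)
    da[:-1] = (c[1:] - c[:-1]) / h          # da(x_i) = (c(x_{i+1}) - c(x_i)) / h, cf. (35)
    da[-1] = da[-2]                         # simplification at the last interior node
    b = -lam / x ** beta                    # b(x) = -lambda / x^beta, cf. (36)

    g1 = -theta * dt / h**2 * c                                               # (39)
    g2 = 1 + 2 * theta * dt / h**2 * c + theta * dt / h * da + theta * dt * b  # (40)
    g3 = -theta * dt / h**2 * c - theta * dt / h * da                          # (41)
    k1 = (1 - theta) * dt / h**2 * c                                           # (42)
    k2 = 1 - (1 - theta) * dt / h * da - 2 * (1 - theta) * dt / h**2 * c - (1 - theta) * dt * b  # (43)
    k3 = (1 - theta) * dt / h * da + (1 - theta) * dt / h**2 * c               # (44)

    D = np.diag(g2) + np.diag(g3[:-1], 1) + np.diag(g1[1:], -1)
    B = np.diag(k2) + np.diag(k3[:-1], 1) + np.diag(k1[1:], -1)
    return D, B

def theta_step(D, B, u_j, V_j):
    """One step of (45): solve D u^{j+1} = B u^j + V^j."""
    return np.linalg.solve(D, B @ u_j + V_j)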


Step 2. Discretization of the functional \(J\)

\begin{equation} \displaystyle J(\kappa)=\dfrac{1}{2}\int_0^1(u(x,t=T)-u_{obs}(x))^2 dx. \end{equation}
(46)
We recall that the composite Simpson rule (due to Thomas Simpson) to approximate an integral is \begin{equation*} \displaystyle \int_a^b f(x) \ dx\simeq\frac{h}{3}\left[ f(x_0)+4 \sum_{i=1}^{\frac{N+1}{2}}f(x_{2i-1})+2 \sum_{i=1}^{\frac{N+1}{2}-1}f(x_{2i})+f(x_{N+1}) \right] \end{equation*} with \(x_0=a\), \(x_{N+1}=b\), \(x_i=a+ih\), \(i \ \in \left\lbrace 1,\dots,N\right\rbrace\), where \(N+1\) is assumed to be even. Let the function
\begin{equation} \varphi(x)=(u(x,T)-u_{obs}(x))^2 \ \ \ \forall x\in \Omega. \end{equation}
(47)
We have \begin{equation*} \displaystyle \int_0^1 \varphi(x) \ dx\simeq\frac{h}{3}\left[ \varphi(0)+4 \sum_{i=1}^{\frac{{N+1}}{2}}\varphi(x_{2i-1})+2 \sum_{i=1}^{\frac{{N+1}}{2}-1}\varphi(x_{2i})+\varphi(1) \right]. \end{equation*} Therefore
\begin{equation} \begin{array}{l} \displaystyle J(\kappa)\simeq\dfrac{h}{6}\left[ \varphi(0)+4 \sum_{i=1}^{\frac{{N+1}}{2}}\varphi(x_{2i-1})+2 \sum_{i=1}^{\frac{{N+1}}{2}-1}\varphi(x_{2i})+\varphi(1) \right].\square \end{array} \end{equation}
(48)
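As an illustration, here is a short sketch (Python/NumPy) of evaluating (48); the function name and arguments are hypothetical, the final state u_T is assumed to come from the discrete scheme above, and \(N+1\) is assumed even so that the composite Simpson rule applies.

import numpy as np

def cost_J(u_T, u_obs, h):
    """Approximate J = 0.5 * int_0^1 (u(x,T) - u_obs(x))^2 dx by Simpson's rule.

    u_T, u_obs : arrays of length N+2 on the nodes x_0, ..., x_{N+1}.
    """
    phi = (u_T - u_obs) ** 2
    integral = (h / 3.0) * (phi[0] + phi[-1]
                            + 4.0 * phi[1:-1:2].sum()    # odd-index nodes
                            + 2.0 * phi[2:-1:2].sum())   # even interior nodes
    return 0.5 * integral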

5. Genetic algorithm and hybrid method

Genetic Algorithms (GAs) are adaptive search and optimization methods that are based on the genetic processes of biological organisms. Their principles were first laid down by Holland. The aim of a GA is to optimize a problem-defined function, called the fitness function. To do this, GAs maintain a population of individuals (suitably represented candidate solutions) and evolve this population over time. At each iteration, called a generation, the new population is created by selecting individuals according to their level of fitness in the problem domain and breeding them together using operators borrowed from natural genetics, such as crossover and mutation. As the population evolves, the individuals in general tend toward the optimal solution. The basic structure of a GA is the following:
1. Initialize a population of individuals;
2. Evaluate each individual in the population;
3. while termination criterion not reached do
{
4. Select individuals for the next population;
5. Apply genetic operators (crossover, mutation) to produce new individuals;
6. Evaluate the new individuals;
}
7. return the best individual.
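A minimal, generic sketch of this loop (Python); the helpers init_population, fitness, select, crossover and mutate are placeholders for the problem-specific operators and are not defined in the paper.

import random

def genetic_algorithm(init_population, fitness, select, crossover, mutate,
                      max_generations, p_mut=0.1):
    """Generic GA loop following steps 1-7 above (minimisation of fitness)."""
    population = init_population()                      # step 1
    best = min(population, key=fitness)                 # step 2
    for _ in range(max_generations):                    # step 3: termination criterion
        parents = select(population, fitness)           # step 4
        children = []
        for mother, father in zip(parents[::2], parents[1::2]):
            child = crossover(mother, father)           # step 5
            if random.random() < p_mut:
                child = mutate(child)
            children.append(child)
        population = children
        best = min(population + [best], key=fitness)    # step 6: evaluate, keep the best
    return best                                         # step 7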
Hybrid methods combine principles from genetic algorithms and other optimization methods. In this approach, we combine the genetic algorithm with a descent method (the steepest descent algorithm (FP)) as follows:
We assume that we have partial knowledge of the background state \(k^b\) at certain points \((x_i)_{i\in I}\), \(I\subset \left\lbrace 1,2,..,N+1\right\rbrace\).
An individual is a vector \(k\); the population is a set of individuals.
The initialization of an individual is as follows:
\begin{equation}\label{EQCHOSEN} \begin{array}{l} \text{ for } i=1 \text{ to } N+1\\ \text{ if } i \in I \ k(x_i) \text{ is chosen in the vicinity of } k^b(x_i) \\ else \ k(x_i) \text{ is chosen randomly }\\ \text{end if }\\ \text{end for}\\ \end{array} \end{equation}
(49)
Starting from an initial population, we apply genetic operators (crossover, mutation) to produce a new population in which each individual is an initial point for the descent method (FP). When a specified number of generations is reached without improvement of the best individual, only the fittest individuals (e.g. the fittest 10% of the population) survive. The remaining individuals die and their places are taken by new individuals with new genetic material (45% are chosen randomly, the other 45% are chosen as in (49)). At each generation we keep the best individual. The algorithm ends when \(\displaystyle \mid J(k)\mid < \mu \) or \(generation\geqslant Maxgen\), where \(\mu\) is a given precision (see Figure 2).
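A sketch of the initialization rule (49) (Python/NumPy); here k_b is the partially known background, known_idx stands for the index set \(I\), and the "vicinity" radius delta and the random range are illustrative choices, not values fixed by the paper.

import numpy as np

def init_individual(N, known_idx, k_b, delta=0.05, low=0.0, high=1.0):
    """Create one individual k following (49) on the N+1 grid points."""
    k = np.empty(N + 1)
    for i in range(N + 1):
        if i in known_idx:
            # chosen in the vicinity of the background value k_b(x_i)
            k[i] = k_b[i] + np.random.uniform(-delta, delta)
        else:
            # chosen randomly
            k[i] = np.random.uniform(low, high)
    return k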
The main steps of the descent method (FP) at each iteration are:
- Calculate \(u^{j}\), the solution of (1) with coefficient \(k^j\);
- Calculate \(P^{j}\), the solution of the adjoint problem (32);
- Calculate the descent direction \(d_{j}=-\nabla J(k^j)\);
- Find \(\displaystyle t_{j}=\underset{t>0}{argmin}\) \(J(k^j+td_{j})\);
- Update the variable \(k^{j+1}=k^j+t_{j}d_{j}\).
The algorithm ends when \(\displaystyle \mid J(k^j)\mid< \mu\), where \(\mu\) is a given small precision. The step \(t_{j}\) is chosen by an inexact line search using the Armijo-Goldstein rule, as follows: let \(\beta \in \,]0,1[\), a reduction factor \(\alpha\in\,]0,1[\) and an initial step \(\alpha_i>0\);
if \(J(k^{j}+\alpha_i d_{j})\leqslant J(k^{j})-\beta \alpha_{i} d^{T}_{j}d_{j}\) (sufficient decrease, since \(d_j=-\nabla J(k^j)\)),
then \(t_{j}=\alpha_i\) and stop;
otherwise
set \(\alpha_i = \alpha \alpha_i\) and test again.
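A compact sketch of one such iteration with the backtracking rule above (Python/NumPy); J and grad_J stand for the evaluations performed in the first three steps (direct solve, adjoint solve, formula (33)) and are assumed to be provided, and the numerical parameters are illustrative.

import numpy as np

def descent_step(k, J, grad_J, alpha_init=1.0, alpha=0.5, beta=1e-4, max_backtracks=30):
    """One steepest-descent step k -> k + t*d with Armijo-Goldstein backtracking."""
    g = grad_J(k)
    d = -g                                   # descent direction d_j = -grad J(k_j)
    t = alpha_init
    J_k = J(k)
    for _ in range(max_backtracks):
        # sufficient-decrease test: J(k + t d) <= J(k) - beta * t * d^T d
        if J(k + t * d) <= J_k - beta * t * float(d @ d):
            break
        t *= alpha                           # shrink the step: alpha_i <- alpha * alpha_i
    return k + t * d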

Figure 2. Hybrid Algorithm

6. Numerical experiments

In this section, we do three tests. In the first test, we recall the result obtained by the simple descent algorithm with \(\varepsilon=10^{-5}\) (Figure 3). In the second test, we run only the genetic algorithm (Figure 4). Finally, in the third test, we run the hybrid approach with parameters \(\alpha=1/3\), \(\beta=3/4\), \(\lambda=-2/3\), \(N=M=99\), number of individuals \(=40\), and number of generations \(=2000\). In the figures below, the exact function is drawn in red and the rebuilt function in blue.

Figure 3. Test with simple descent and \(\varepsilon=10^{-5}\).

Figure 4. Test with genetic algorithm.

Figure 5. Test with hybrid algorithm.

These tests show that we cannot rebuild the diffusion coefficient \(k\) by the descent method or the genetic approach alone (Figures 3 and 4), while the hybrid approach proves effective in reconstructing this coefficient (Figure 5).

7. Conclusion

We have presented in this paper a new approach based on a hybrid genetic algorithm for the determination of a coefficient in the diffusion term of a degenerate/singular one-dimensional linear parabolic equation from final data observations. Firstly, with the aim of showing that the minimization problem and the direct problem are well-posed, we have proved that the solution's behavior changes continuously with respect to the initial conditions. Secondly, we have shown that the minimization problem has at least one minimum. Finally, we have proved the differentiability of the cost function, which gives the existence of the gradient of this functional; this gradient is computed using the adjoint state method. We have also presented some numerical experiments to show the performance of this approach in reconstructing the diffusion coefficient of a degenerate parabolic problem.

Competing Interests

The authors declare that they have no competing interests.

References

  1. Vogel, C. R. (2002). Computational methods for inverse problems (Vol. 23). SIAM Frontiers in Applied Mathematics. [Google Scholor]
  2. Engl, H. W., Hanke, M., & Neubauer, A. (1996). Regularization of inverse problems (Vol. 375). Springer Science & Business Media. [Google Scholor]
  3. Hansen, P. C. (1992). Analysis of discrete ill-posed problems by means of the L-curve. SIAM review, 34(4), 561-580. [Google Scholor]
  4. Tikhonov, A. N., Goncharsky, A., Stepanov, V. V., & Yagola, A. G. (2013). Numerical methods for the solution of ill-posed problems (Vol. 328). Springer Science & Business Media. [Google Scholor]
  5. Atifi, K., Balouki, Y., Essoufi, E. H., & Khouiti, B. (2017). Identifying initial condition in degenerate parabolic equation with singular potential. International Journal of Differential Equations, 2017. [Google Scholor]
  6. Alabau-Boussouira, F., Cannarsa, P., & Fragnelli, G. (2006). Carleman estimates for degenerate parabolic operators with applications to null controllability. Journal of Evolution Equations, 6(2), 161-204. [Google Scholor]
  7. Hassi, E. M., Khodja, F. A., Hajjaj, A., & Maniar, L. (2013). Carleman estimates and null controllability of coupled degenerate systems. J. of Evol. Equ. and Control Theory, 2, 441-459. [Google Scholor]
  8. Cannarsa, P., & Fragnelli, G. (2006). Null controllability of semilinear degenerate parabolic equations in bounded domains. Electronic Journal of Differential Equations (EJDE)[electronic only], 136, 1-20.[Google Scholor]
  9. Emamirad, H., Goldstein, G., & Goldstein, J. (2012). Chaotic solution for the Black-Scholes equation. Proceedings of the American Mathematical Society, 140(6), 2043-2052.[Google Scholor]
  10. Vancostenoble, J. (2011). Improved Hardy-Poincaré inequalities and sharp Carleman estimates for degenerate/singular parabolic problems. Discrete Contin. Dyn. Syst. Ser. S, 4(3), 761-790. [Google Scholor]
  11. Hasanov, A., DuChateau, P., & Pektaş, B. (2006). An adjoint problem approach and coarse-fine mesh method for identification of the diffusion coefficient in a linear parabolic equation. Journal of Inverse and Ill-posed Problems jiip, 14(5), 435-463.[Google Scholor]
]]>
Analytic study on hilfer fractional langevin equations with impulses https://old.pisrt.org/psr-press/journals/oma-vol-2-issue-2-2018/analytic-study-on-hilfer-fractional-langevin-equations-with-impulses/ Fri, 14 Dec 2018 09:01:32 +0000 https://old.pisrt.org/?p=1643
OMA-Vol. 2 (2018), Issue 2, pp. 129–141 | Open Access Full-Text PDF
S. Harikrishnan, E. M. Elsayed, K. Kanagarajan
Abstract: In this paper, we find a solution of a new type of Langevin equation involving Hilfer fractional derivatives with impulsive effect. We formulate sufficient conditions for the existence and uniqueness of solutions. Moreover, we present Hyers-Ulam stability results.
]]>
Open Access Full-Text PDF

Open Journal of Mathematical Analysis

Analytic study on Hilfer fractional Langevin equations with impulses

S. Harikrishnan, E. M. Elsayed\(^1\), K. Kanagarajan
Department of Mathematics, Sri Ramakrishna Mission Vidyalaya College of Arts and Science, Coimbatore-641020, India.; (S.H & K.K)
Department of Mathematics, Faculty of Science,King Abdulaziz University, Jeddah 21589, Saudi Arabia.; (E.M.E)
Department of Mathematics, Faculty of Science, Mansoura University, Mansoura 35516, Egypt. (K.K)
\(^{1}\)Corresponding Author;  emmelsayed@yahoo.com

Copyright © 2018 S. Harikrishnan, E. M. Elsayed, K. Kanagarajan. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.

Abstract

In this paper, we find a solution of a new type of Langevin equation involving Hilfer fractional derivatives with impulsive effect. We formulate sufficient conditions for the existence and uniqueness of solutions. Moreover, we present Hyers-Ulam stability results.

Keywords:

Langevin equation; Impulsive condition; Fixed point theorem; Ulam stability.

1. Introduction

Fractional differential equations (FDEs) have gained increasing attention because of their varied applications in applied sciences and engineering; see the monographs [1, 2, 3]. The memory and hereditary properties of various materials and processes can be properly described by FDEs. Owing to the importance of FDEs, many researchers have focused their work on existence theory and stability criteria. In this work, we study the existence of solutions for FDEs with the Hilfer fractional derivative (HFD), which was introduced by Hilfer [1]. The HFD interpolates between the classical Riemann-Liouville (RL) and the Liouville-Caputo (LC) fractional derivatives. Recently, the HFD has been studied in many papers; for a detailed treatment, see [4, 5, 6, 7, 8, 9, 10].

In 1908, Langevin introduced an equation of motion for a Brownian particle, which is now named the Langevin equation. Langevin equations have been widely used in the study of stochastic differential equations [11]. For systems in complex media, the standard Langevin equation does not provide a correct description of the dynamics. As a result, various generalizations of the Langevin equation have been proposed to describe dynamical processes in a fractal medium. One such generalization is the generalized Langevin equation, which incorporates the fractal and memory properties, with a dissipative memory kernel, into the Langevin equation. This gives rise to Langevin equations of fractional order. In 2007, Fa [12] discussed the variance and velocity correlation of Langevin equations with both RL and LC fractional derivatives. In 2011, the existence of solutions was analysed in [13]. Since then, many authors have discussed the existence of solutions under different conditions; see [14, 15, 16, 17].

Impulsive differential equations have attracted attention since they serve as an important tool to characterize phenomena in which sudden, discontinuous jumps occur in various fields of science and engineering, and impulsive FDEs in particular have received much attention; see [18, 19, 20].

The concept of stability for a functional equation arises when we replace the functional equation by an inequality which acts as a perturbation of the equation. Considerable attention has been given to the study of Ulam-Hyers (UH) and Ulam-Hyers-Rassias (UHR) stability. More details from a historical point of view, and recent developments of such stabilities, are reported in [21, 22, 23, 24, 25, 26].

Consider the following system of Langevin differential equations involving the HFD with impulsive effect
\begin{align} \label{e1} \begin{cases} D^{\alpha_1, \beta} (D^{\alpha_2, \beta} + \lambda)x(t) = f(t, x(t)), \quad t \in J^{'}:= J \setminus \left\{t_1, t_2,...,t_m\right\},\\ J= [a, b], \ \ t \neq t_k,\\ \Delta I^{1-\gamma}x(t)|_{t=t_k} = \psi_k (x(t_k)), \quad t = t_k, \ \ \ k = 1, 2, ..., m,\\ I^{1-\gamma} x(t)|_{t=a} = x_a, \quad \gamma = (\alpha_1 + \alpha_2)(1 - \beta) + \beta, \end{cases} \end{align}
(1)
where \(D^{\alpha_1, \beta}\) and \(D^{\alpha_2, \beta}\) \((0 < \alpha_1, \alpha_2 < 1,\ 0 \leq \beta \leq 1)\) are the generalized Riemann-Liouville (Hilfer) fractional derivatives of orders \(\alpha_1,\ \alpha_2\) and type \(\beta\). Here, the function \(f:J\times R \rightarrow R\) is continuous, \(\psi_k: R \rightarrow R\), \(a \in R\), \(a = t_0 < t_1 < ...< t_m < t_{m+1} = b\), and \(\Delta I^{1-\gamma}x(t)|_{t=t_k} = I^{1-\gamma}x(t_k^+) - I^{1-\gamma}x(t_k^-)\), where \(I^{1-\gamma}x(t_k^+) = \lim_{h \rightarrow 0^+} I^{1-\gamma}x(t_k + h)\) and \(I^{1-\gamma}x(t_k^-) = \lim_{h \rightarrow 0^-} I^{1-\gamma}x(t_k + h)\) denote the right and left limits of \(I^{1-\gamma}x(t)\) at \(t=t_k\). To establish existence and uniqueness results for problem (1), some of the following conditions have to be satisfied:
  • [(H1)] Let \(f:J\times R \rightarrow R\) be a continuous function and there exists a positive constant \(L_f > 0\), such that \begin{eqnarray*} \left|f(t,x)-f(t,y)\right| \leq L_f \left| x - y \right|, \ \mbox{for all} \ x,y \in R. \end{eqnarray*}
  • [(H2)] Let the functions \(\psi_k : R \rightarrow R\) be continuous and suppose there exists a constant \(L_{\psi} > 0\) such that $$ \left|\psi_k(x)- \psi_k(y)\right| \leq L_{\psi} \left| x - y \right| , \ \mbox{for all} \ x,y \in R, k = 1, 2, ... , m. $$
  • [(H3)] There exists an increasing function \(\varphi \in PC_{1-\gamma} (J, R)\) and a constant \(\lambda_{\varphi} > 0\) such that for any \(t \in J\) $$ I^\alpha \varphi(t) \leq \lambda_{\varphi} \varphi(t). $$
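For instance, on \(J=[0,1]\) the hypothetical choice \(f(t,x)=\frac{\sin x}{t+5}\) satisfies (H1) with \(L_f=\frac{1}{5}\), since \(\left|\sin x-\sin y\right|\leq \left|x-y\right|\) and \(\frac{1}{t+5}\leq \frac{1}{5}\) on \(J\), while \(\psi_k(x)=\frac{x}{10}\) satisfies (H2) with \(L_{\psi}=\frac{1}{10}\).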
The paper is organized as follows: In Section 2, we present some necessary definitions and preliminary results that will be used to prove our main results. The proofs of our main results are given in Section 3.

2. Preliminaries

In this section, we present some known definitions and results that will help us in proving our main results. Consider the space $$ PC(J, R) = \left\{x : J \rightarrow R : x \in C((t_k, t_{k+1}], R), \ k = 0,...,m, \ \mbox{and} \ x(t_k^+), \ x(t_k^-) \ \mbox{exist}\right\}. $$ Now we consider the weighted space \(PC_{\gamma}(J, R)\), $$ PC_{\gamma}(J, R) = \left\{x : (t - t_k)^{\gamma} x(t) |_{t \in (t_k, t_{k+1}]} \in C[t_k, t_{k+1}], \ k = 0,...,m \right\}, $$ where \(0 \leq \gamma < 1\), which is a Banach space with the norm $$ \left\|x\right\|_{PC_{\gamma}} = \sup_{t \in (t_k, t_{k+1}]} \left|(t - t_k)^{\gamma} x(t)\right|, \ \ k = 0,...,m. $$

Definition 2.1. [2] The Riemann-Liouville (RL) fractional integral of order \(\alpha > 0\) of function \(f : [0, \infty) \rightarrow R\) can be written as \begin{eqnarray*} I^{\alpha} f(t) = \frac{1}{ \Gamma (\alpha)} \int_{0}^{t}(t-s)^{\alpha-1} f(s) ds. \end{eqnarray*}

Definition 2.2. [2] The RL fractional derivative of order \(\alpha > 0\) of a continuous function \(f : [0, \infty) \rightarrow R\) can be written as \begin{eqnarray*} D^{\alpha} f (t) = \frac{1}{\Gamma(n - \alpha)} \left(\frac{d}{dt}\right)^n \int_{0}^{t} (t-s)^{n - \alpha - 1} f(s) ds, \ \ \ n-1 < \alpha < n, \end{eqnarray*} provided that the right side is pointwise defined on \([0, \infty)\).

Definition 2.3. [2] The LC fractional derivative of order \(\alpha > 0\) of a continuous function \(f : [0, \infty) \rightarrow R\) can be written as \begin{eqnarray*} {}^{C}D^{\alpha} f (t) = D^{\alpha} \left[ f(t) - \sum_{k=0}^{n-1} \frac{t^k}{k!} f^{(k)}(0) \right], \ t > 0, n-1 < \alpha < n. \end{eqnarray*}

Definition 2.4. [1] The HFD of order \(0 < \alpha < 1\) and \(0 \leq \beta \leq 1\) of function \(f(t)\) is defined by $$ D^{\alpha , \beta} f(t) =( I^{\beta(1-\alpha)}D(I^{(1-\beta)(1-\alpha)}f))(t) . $$ The HFD is considered as an interpolation between the RL and LC fractional derivative and the relations are given below.

Remark 2.5. (i) The operator \(D^{\alpha , \beta}\) can also be written as $$ D^{\alpha , \beta} = ( I^{\beta(1-\alpha)}D(I^{(1-\beta)(1-\alpha)})) = I^{\beta(1-\alpha)} D^{\gamma}, \ \ \ \gamma = \alpha + \beta - \alpha \beta . $$ (ii) If \(\beta = 0\), then \(D^{\alpha , \beta} = D^{\alpha, 0}\) reduces to the RL fractional derivative.
(iii) If \(\beta = 1\), then \(D^{\alpha , \beta} = I^{1-\alpha} D \) reduces to the LC fractional derivative.

Lemma 2.6. [10] If \(\alpha > 0\) and \(\beta > 0\), then $$ \left[I^{\alpha} (t)^{\beta-1}\right](x) = \frac{\Gamma (\beta)}{\Gamma(\beta +\alpha)} x^{\beta+\alpha-1}, $$ and $$ \left[D^{\alpha} (t)^{\alpha-1}\right](x) = 0, \ \ \ 0 < \alpha < 1 . $$
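As a quick numerical check (not part of the paper), the first identity of Lemma 2.6 can be verified by computing the Riemann-Liouville integral of Definition 2.1 with quadrature; a minimal Python sketch with illustrative values of \(\alpha\), \(\beta\) and \(x\):

from scipy.integrate import quad
from scipy.special import gamma

alpha, beta, x = 0.5, 1.5, 2.0   # illustrative values only

# I^alpha f(x) = 1/Gamma(alpha) * int_0^x (x-s)^{alpha-1} s^{beta-1} ds.
# The weight='alg' option handles the (x-s)^{alpha-1} endpoint singularity.
val, _ = quad(lambda s: s**(beta - 1), 0.0, x, weight='alg', wvar=(0.0, alpha - 1.0))
numeric = val / gamma(alpha)

closed_form = gamma(beta) / gamma(beta + alpha) * x**(beta + alpha - 1)
print(numeric, closed_form)      # the two values agree to quadrature accuracy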

Lemma 2.7. [10] If \(\alpha > 0\), \(\beta > 0\) and \(f \in L^{1} (a, b] \), then the following properties hold: $$ I^{\alpha} I^{\beta} f(t) = I^{\alpha + \beta} f(t), $$ and $$ D^{\alpha} I^{\alpha} f(t) = f(t). $$

Next, we give the definitions and criteria of UH stability and UHR stability for Langevin differential equations with impulsive effect involving the Hilfer fractional derivative. Let \(\epsilon\) be a positive number and let \(\varphi : J \rightarrow R^{+}\) be a continuous function. For every \(t \in J^{'}\) and \(k = 1, 2,..., m\), we consider the following inequalities:
\begin{eqnarray}\label{11} \left\{\begin{array}{llll} \left|D^{\alpha_1, \beta} (D^{\alpha_2, \beta} + \lambda) z(t) - f(t, z(t))\right| & \leq & \epsilon, \\ \left|\Delta I^{1-\gamma}z(t)|_{t=t_k} - \psi_k (z(t_k))\right| & \leq & \epsilon, \end{array}\right. \end{eqnarray}
(2)
\begin{eqnarray}\label{13} \left\{\begin{array}{llll} \left|D^{\alpha_1, \beta} (D^{\alpha_2, \beta} + \lambda) z(t) - f(t, z(t))\right| &\leq& \epsilon\varphi(t), \\ \left|\Delta I^{1-\gamma}z(t)|_{t=t_k} - \psi_k (z(t_k))\right| &\leq& \epsilon\varphi(t), \end{array}\right. \end{eqnarray}
(3)
\begin{eqnarray}\label{14} \left\{\begin{array}{llll} \left|D^{\alpha_1, \beta} (D^{\alpha_2, \beta} + \lambda) z(t) - f(t, z(t))\right| &\leq& \varphi(t), \\ \left|\Delta I^{1-\gamma}z(t)|_{t=t_k} - \psi_k (z(t_k))\right| &\leq& \varphi(t), \end{array}\right. \end{eqnarray}
(4)

Definition 2.8. The system of equations given in (1) is UH stable if there exists a real number \(C_f > 0\) such that for each \(\epsilon > 0\) and for each solution \(z \in PC_{1-\gamma} (J, R)\) of the inequality (2) there exists a solution \(x \in PC_{1-\gamma} (J, R)\) of Equation (1) with $$ \left|z(t) - x(t)\right| \leq C_f \ \epsilon, \quad t \in J. $$

Definition 2.9. The system of equations given in (1) is generalized UH stable if there exists \(\varphi_f \in C(R^{+}, R^{+})\) with \(\varphi_f (0) = 0\) such that for each solution \(z \in PC_{1-\gamma} (J, R)\) of the inequality (2) there exists a solution \(x \in PC_{1-\gamma} (J, R)\) of Equation (1) with $$ \left|z(t) - x(t)\right| \leq \varphi_f (\epsilon), \quad t \in J. $$

Definition 2.10. The system of equations given in (1) is UHR stable with respect to \(\varphi \in PC_{1-\gamma} (J, R^+)\) if there exists a real number \(C_f > 0\) such that for each \(\epsilon > 0\) and for each solution \(z \in PC_{1-\gamma} (J, R)\) of the inequality (3) there exists a solution \(x \in PC_{1-\gamma} (J, R)\) of Equation (1) with $$ \left|z(t) - x(t)\right| \leq C_f \ \epsilon \varphi(t), \quad t \in J. $$

Definition 2.11. The system of equations given in (1) is generalized UHR stable with respect to \(\varphi \in PC_{1-\gamma} (J, R^+)\) if there exists a real number \(C_{f,\varphi} > 0\) such that for each solution \(z \in PC_{1-\gamma} (J, R)\) of the inequality (4) there exists a solution \(x \in PC_{1-\gamma} (J, R)\) of Equation (1) with $$ \left|z(t) - x(t)\right| \leq C_{f, \varphi} \varphi(t), \quad t \in J. $$

Remark 2.12. A function \(z \in PC_{1-\gamma} (J, R)\) is a solution of the inequality $$ \left|D^{\alpha_1, \beta} (D^{\alpha_2, \beta} + \lambda) z(t) - f(t, z(t))\right| \leq \epsilon , $$ if and only if there exist a function \(g\in PC_{1-\gamma}(J, R)\) and a sequence \(g_k\), \(k = 1, 2, . . . , m\) (which depend on \(z\)) such that

    (i) \(\left|g(t)\right| \leq \epsilon, \ \left|g_k\right| < \epsilon \).
    (ii) \(D^{\alpha_1, \beta} (D^{\alpha_2, \beta} + \lambda) z(t) = f(t, z(t)) + g(t)\).
    (iii) \(\Delta I^{1-\gamma} z(t)|_{t_k} = \psi_k (z(t_k)) + g_k \).

Lemma 2.13. [27] Let \(a(t)\) be a nonnegative function locally integrable on \(a \leq t < b\) for some \(b \leq \infty\), and let \(g(t)\) be a nonnegative, nondecreasing continuous function defined on \(a \leq t < b\) such that \(g(t) \leq K\) for some constant \(K\). Further, let \(x(t)\) be a nonnegative function, locally integrable on \(a \leq t < b\), satisfying $$ \left|x(t)\right| \leq a(t) + g(t) \int_{a}^{t} (t-s)^{\alpha -1} x(s) ds, \ \ t \in [a, b) $$ with some \(\alpha > 0\). Then $$ \left|x(t)\right| \leq a(t) + \int_{a}^{t} \left[ \sum_{n=1}^{\infty} \frac{(g(t)\Gamma (\alpha))^n}{ \Gamma(n\alpha)} (t-s)^{n\alpha -1}\right] a(s) ds, \ \ a \leq t < b. $$

Remark 2.14. Under the hypothesis of Lemma 2.13, let \(a(t)\) be a nondecreasing function on \([a, b)\). Then \(x(t) \leq a(t) E_{\alpha}(g(t)\Gamma(\alpha)t^{\alpha})\), where \(E_\alpha\) is the Mittag-Leffler function defined by $$ E_\alpha(z) = \sum_{k=0}^{\infty} \frac{z^k}{\Gamma(k\alpha+ 1)} , \ z \in C, \ Re(\alpha) > 0. $$
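A minimal Python sketch (not part of the paper) that evaluates the Mittag-Leffler function by truncating the series above:

import math
from scipy.special import gamma

def mittag_leffler(z, alpha, terms=60):
    # truncated series E_alpha(z) = sum_{k>=0} z**k / Gamma(alpha*k + 1)
    return sum(z**k / gamma(alpha * k + 1) for k in range(terms))

# consistency check: E_1(z) = exp(z)
print(mittag_leffler(1.0, 1.0), math.exp(1.0))   # both approximately 2.71828
print(mittag_leffler(0.5, 0.5))                  # E_{1/2}(0.5)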

Lemma 2.15. [26] Let \(x \in PC_{1-\gamma}(J, R)\) satisfy the inequality $$ \left|x(t)\right| \leq c_1 + c_2\int_{0}^{t} (t-s)^{\alpha-1} \left|x(s)\right| ds + \sum_{0< t_k< t} \psi_k \left|x(t_k)\right|, $$ where \(c_1\) is a nonnegative, continuous and nondecreasing function and \(c_2, \psi_k\) are constants. Then $$ \left|x(t)\right| \leq c_1\left(1 + \psi E_{\alpha} (c_2 \Gamma(\alpha) t^{\alpha})\right)^{k} E_{\alpha} (c_2 \Gamma(\alpha) t^{\alpha}) \ \ \mbox{for} \ t \in (t_k, t_{k+1}], $$ where \(\psi = \sup \left\{\psi_k : k = 1, 2, 3,...,m\right\}\).

Theorem 2.16. [28](Schauder Fixed Point Theorem) Let \(E\) be a Banach space, let \(Q\) be a nonempty, bounded, convex and closed subset of \(E\), and let \(N : Q \rightarrow Q\) be a continuous and compact map. Then \(N\) has at least one fixed point in \(Q\).

Theorem 2.17. [28](Banach Fixed Point Theorem) Let \(Q\) be a nonempty closed subset of a Banach space \(E\). Then any contraction mapping \(N\) of \(Q\) into itself has a unique fixed point.
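As a toy illustration (not part of the paper) of how Theorem 2.17 is used: the map \(N(x)=\cos x\) maps the closed set \([0,1]\) into itself and is a contraction there, since \(|N^{\prime}(x)|=\left|\sin x\right| \leq \sin 1 < 1\), so the Picard iterates converge to its unique fixed point.

import math

x = 0.5                    # any starting point in [0, 1]
for _ in range(50):
    x = math.cos(x)        # x_{n+1} = N(x_n)

print(x, math.cos(x))      # the fixed point (approx. 0.739085); N(x) is approx. x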

3. Main results

In this section, we present the main results on the existence of solutions of Equation (1). We need the following lemma to establish them.

Lemma 3.1. Let \(f : J \times R \rightarrow R\) be continuous. A function \(x\) is a solution of the fractional integral equation

\begin{align} \label{e2} x(t) = \left\{\begin{array}{lr} \frac{x_a}{\Gamma(\gamma)} (t-a)^{\gamma-1} - \lambda I_{a}^{\alpha_2} x(t) + I_{a}^{\alpha_1 + \alpha_2} f(t, x(t)) \quad if \quad t \in [a, t_1],\\ \frac{(t-t_k)^{\gamma-1}}{\Gamma (\gamma)} \left[x_a + \displaystyle \sum_{0< t_k < t}\psi_k (x(t_k)) - \displaystyle \sum_{0< t_k < t} \lambda I_{t_{k-1}}^{(1-\alpha_1)(1-\beta) + \alpha_2 \beta} x(t_k) \right. \\ \displaystyle\left. + \sum_{0< t_k < t} I_{t_{k-1}}^{1+\beta(\alpha_1 + \alpha_2 -1)} f(t_k, x(t_k))\right] - \lambda I_{t_k}^{\alpha_2} x(t) \\ + I_{t_k}^{\alpha_1 + \alpha_2} f(t, x(t)) \ \ \ \ if \ \ \ t \in (t_k, t_{k+1}],\end{array} \right. \end{align}
(5)
where \(k = 1,...,m\), if and only if \(x\) is a solution of the fractional initial value problem \begin{align*} D^{\alpha_1, \beta} (D^{\alpha_2, \beta} + \lambda)x(t) &= f(t, x(t)), \\ \Delta I^{1-\gamma}x(t)|_{t=t_k} &= \psi_k (x(t_k)),\\ I^{1-\gamma} x(a) &= x_a. \end{align*}

Theorem 3.2. Assume that [H1] and [H2] are fulfilled. If \begin{align*} \rho = &\left[\frac{1}{\Gamma(\gamma)}\left( m L_{\psi} (b-a)^{\gamma-1} + \frac{ m \lambda B(\gamma, (1-\alpha_1)(1-\beta) + \alpha_2 \beta)}{\Gamma((1-\alpha_1)(1-\beta) + \alpha_2 \beta)} (b-a)^{1+\alpha_2} \right. \right. \\ & \left. \left. \quad + \frac{m L_f B(\gamma, 1+\beta(\alpha_1 + \alpha_2 -1))}{\Gamma(1+\beta(\alpha_1 + \alpha_2 -1))} (b-a)^{\alpha_1+ \alpha_2}\right) + \frac{\lambda B(\gamma, \alpha_2)}{\Gamma(\alpha_2)} (b-a)^{\alpha_2} \right. \\ & \left. \quad + \frac{ B(\gamma, \alpha_1+\alpha_2)}{\Gamma(\alpha_1+\alpha_2)} (b-a)^{\alpha_1+\alpha_2} \right] < 1, \end{align*} then the Equation (1) has a unique solution.

Proof. The proof is based on the Banach fixed point theorem. Define the operator \(N:PC_{1-\gamma}(J, R) \rightarrow PC_{1-\gamma}(J, R)\) by the equivalent integral equation (5), which can be written in operator form as follows:

\begin{align}\label{e4} Nx(t) = \left\{\begin{array}{lr} \frac{(t-t_k)^{\gamma-1}}{\Gamma (\gamma)} \left[x_a + \displaystyle \sum_{0< t_k < t}\psi_k (x(t_k)) - \displaystyle \sum_{0< t_k < t} \lambda I_{t_{k-1}}^{(1-\alpha_1)(1-\beta) + \alpha_2 \beta} x(t_k) \right. \\ \displaystyle\left. + \sum_{0< t_k < t} I_{t_{k-1}}^{1+\beta(\alpha_1 + \alpha_2 -1)} f(t_k, x(t_k))\right] - \lambda I_{t_k}^{\alpha_2} x(t) \\ + I_{t_k}^{\alpha_1 + \alpha_2} f(t, x(t))\end{array} \right. \end{align}
(6)
First, we show that \(N\) maps \(B_r\) into \(B_r\). It is clear that \( N \) is well defined on \(PC_{1-\gamma}(J, R)\). Moreover for any \(x \in B_r\), we have \begin{align*} & \left|Nx(t)(t-t_k)^{1-\gamma}\right|\\ & \leq \frac{1}{\Gamma (\gamma)} \left[x_a + \displaystyle \sum_{0< t_k < t}\left|\psi_k (x(t_k))\right| + \displaystyle \sum_{0< t_k < t} \lambda I_{t_{k-1}}^{(1-\alpha_1)(1-\beta) + \alpha_2 \beta} \left|x(t_k)\right| \right. \\ &\displaystyle\left. \quad + \sum_{0< t_k < t} I_{t_{k-1}}^{1+\beta(\alpha_1 + \alpha_2 -1)} \left|f(t_k, x(t_k))\right|\right] + \lambda I_{t_k}^{\alpha_2} \left|x(t)\right| + I_{t_k}^{\alpha_1 + \alpha_2} \left|f(t, x(t))\right|\\ & \leq \frac{1}{\Gamma (\gamma)} \left[x_a + \displaystyle \sum_{0< t_k < t}\left|\psi_k (x(t_k)) - \psi_k (0) \right| + \displaystyle \sum_{0< t_k< t}\left|\psi_k (0) \right| \right. \\ &\displaystyle\left. \quad + \displaystyle \sum_{0< t_k < t} \lambda I_{t_{k-1}}^{(1-\alpha_1)(1-\beta) + \alpha_2 \beta} \left|x(t_k)\right| \right. \\ &\displaystyle\left. \quad + \sum_{0 < t_k < t} I_{t_{k-1}}^{1+\beta(\alpha_1 + \alpha_2 -1)} \left|f(t_k, x(t_k)) - f(t_k, 0) \right|\right.\\ &\displaystyle\left. \quad + \sum_{0< t_k < t} I_{t_{k-1}}^{1+\beta(\alpha_1 + \alpha_2 -1)} \left| f(t_k, 0) \right|\right]\\ & \quad + (t-t_k)^{1-\gamma}\lambda I_{t_k}^{\alpha_2} \left|x(t)\right| + (t-t_k)^{1-\gamma}I_{t_k}^{\alpha_1 + \alpha_2} \left|f(t, x(t)) - f(t, 0)\right| \\ &\displaystyle \quad + (t-t_k)^{1-\gamma}I_{t_k}^{\alpha_1 + \alpha_2} \left|f(t, 0)\right|\\ & \leq \frac{1}{\Gamma(\gamma)} \left[x_a + mL_{\psi} (b-a)^{\gamma-1}\left\|x\right\|_{PC_{1-\gamma}} + mL_2 \right.\\ &\displaystyle\left. \quad + \frac{ m \lambda B(\gamma, (1-\alpha_1)(1-\beta) + \alpha_2 \beta)}{\Gamma((1-\alpha_1)(1-\beta) + \alpha_2 \beta)} (b-a)^{1+\alpha_2}\left\|x\right\|_{PC_{1-\gamma}}\right. \\ & \left. \quad + \frac{m L_f B(\gamma, 1+\beta(\alpha_1 + \alpha_2 -1))}{\Gamma(1+\beta(\alpha_1 + \alpha_2 -1))} (b-a)^{\alpha_1+ \alpha_2} \left\|x\right\|_{PC_{1-\gamma}}\right.\\ &\displaystyle\left. \quad + \frac{m l_1 }{\Gamma(2+(\alpha_1+\alpha_2-1)\beta)}(b-a)^{1+(\alpha_1+\alpha_2-1)\beta}\right]\\ & \quad + \frac{\lambda B(\gamma, \alpha_2)}{\Gamma(\alpha_2)} (b-a)^{\alpha_2} \left\|x\right\|_{PC_{1-\gamma}} + \frac{ B(\gamma, \alpha_1+\alpha_2)}{\Gamma(\alpha_1+\alpha_2)} (b-a)^{\alpha_1+\alpha_2} \left\|x\right\|_{PC_{1-\gamma}}\\ & +\frac{l_1}{\Gamma(\alpha_1 + \alpha_2 +1)}(b-a)^{\alpha_1 +\alpha_2 - \gamma + 1}\\ & \leq r. \end{align*} Consequently \(N\) maps \(B_r\) into itself. Let \(x, y \in PC_{1-\gamma}(J, R)\) and \(t \in J\), then we have \begin{align*} &\left|\left(Nx(t) - Ny(t)\right)(t-t_k)^{1-\gamma}\right| \\ & \leq \frac{1}{\Gamma(\gamma)} \left[ \displaystyle \sum_{0< t_k < t}\left|\psi_k (x(t_k)) - \psi_k (y(t_k)) \right| + \displaystyle \sum_{0< t_k < t} \lambda I_{t_{k-1}}^{(1-\alpha_1)(1-\beta) + \alpha_2 \beta} \left|x(t_k) - y(t_k)\right| \right. \\ &\displaystyle\left. \quad + \sum_{0< t_k < t} I_{t_{k-1}}^{1+\beta(\alpha_1 + \alpha_2 -1)} \left|f(t_k, x(t_k)) - f(t_k, y(t_k)) \right| \right] \\ &+ (t-t_k)^{1-\gamma} \lambda I_{t_k}^{\alpha_2} \left|x(t) - y(t)\right| \\ & \quad + (t-t_k)^{1-\gamma}I_{t_k}^{\alpha_1 + \alpha_2} \left|f(t, x(t)) - f(t, y(t))\right|\\ & \leq \frac{1}{\Gamma(\gamma)} \left[ \displaystyle \sum_{0< t_k< t}L_{\psi}\left| (x(t_k)) - (y(t_k)) \right| + \displaystyle \sum_{0< t_k < t} \lambda I_{t_{k-1}}^{(1-\alpha_1)(1-\beta) + \alpha_2 \beta} \left|x(t_k) - y(t_k)\right| \right. 
\\ &\displaystyle\left. \quad + \sum_{0< t_k< t} I_{t_{k-1}}^{1+\beta(\alpha_1 + \alpha_2 -1)} L_{f}\left| x(t_k) - y(t_k) \right| \right] + (t-t_k)^{1-\gamma} \lambda I_{t_k}^{\alpha_2} \left|x(t) - y(t)\right| \\ & \quad + (t-t_k)^{1-\gamma}I_{t_k}^{\alpha_1 + \alpha_2} L_{f} \left| x(t) - y(t)\right|. \end{align*} Thus \begin{align*} &\left\|Nx - Ny\right\|_{PC_{1-\gamma}} \\ & \leq \left[\frac{1}{\Gamma(\gamma)}\left( m L_{\psi} (b-a)^{\gamma-1} + \frac{ m \lambda B(\gamma, (1-\alpha_1)(1-\beta) + \alpha_2 \beta)}{\Gamma((1-\alpha_1)(1-\beta) + \alpha_2 \beta)} (b-a)^{1+\alpha_2} \right. \right. \\ & \left. \left. \quad + \frac{m L_f B(\gamma, 1+\beta(\alpha_1 + \alpha_2 -1))}{\Gamma(1+\beta(\alpha_1 + \alpha_2 -1))} (b-a)^{\alpha_1+ \alpha_2}\right) + \frac{\lambda B(\gamma, \alpha_2)}{\Gamma(\alpha_2)} (b-a)^{\alpha_2} \right. \\ & \left. \quad + \frac{ B(\gamma, \alpha_1+\alpha_2)}{\Gamma(\alpha_1+\alpha_2)} (b-a)^{\alpha_1+\alpha_2} \right] \left\|x-y\right\|_{PC_{1-\gamma}}\\ & = \rho \left\|x-y\right\|_{PC_{1-\gamma}}. \end{align*} This yields that \(N\) has unique fixed point which is solution of Equation (1).
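As a numerical sanity check of the contraction condition, the constant \(\rho\) of Theorem 3.2 can be evaluated directly once the data of problem (1) are fixed; a minimal Python sketch with purely illustrative parameter values (not taken from the paper):

from scipy.special import beta as B, gamma as G

# hypothetical data for problem (1); a concrete f and psi_k would supply L_f, L_psi
alpha1, alpha2, bet = 0.4, 0.5, 0.5
a, b, m, lam, L_f, L_psi = 0.0, 0.1, 2, 0.1, 0.5, 0.1

gam = (alpha1 + alpha2) * (1 - bet) + bet      # gamma as in (1)
p = (1 - alpha1) * (1 - bet) + alpha2 * bet    # order appearing in the impulse terms
q = 1 + bet * (alpha1 + alpha2 - 1)

rho = (1 / G(gam)) * (m * L_psi * (b - a) ** (gam - 1)
                      + m * lam * B(gam, p) / G(p) * (b - a) ** (1 + alpha2)
                      + m * L_f * B(gam, q) / G(q) * (b - a) ** (alpha1 + alpha2)) \
      + lam * B(gam, alpha2) / G(alpha2) * (b - a) ** alpha2 \
      + B(gam, alpha1 + alpha2) / G(alpha1 + alpha2) * (b - a) ** (alpha1 + alpha2)

print("rho =", rho, "(contraction)" if rho < 1 else "(condition fails)")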

Theorem 3.3. Assume that [H1] and [H2] are satisfied. Then, Equation (1) has at least one solution.

Proof. Let us denote \(f(t, 0) = l_1\) and \(\psi_k(0) = l_2\). Consider $$B_r=\left\{ x \in PC_{1-\gamma}(J, R): \left\|x\right\|_{PC_{1-\gamma}} \leq r \right\}.$$ The operator \(N\) is given in the proof of Theorem 3.2. The proof is based on Theorem 2.16 and proceeds in the following steps:

Step 1: The operator \(N : B_r \rightarrow B_r\) is continuous. Let \(x_n\) be a sequence such that \(x_n \rightarrow x\) in \(B_r\). Then for each \(t \in J\), we have \begin{align*} &\left|(N x_n)(t) (t-t_k)^{1-\gamma}- (Nx)(t)(t-t_k)^{1-\gamma}\right|\\ & \leq \frac{1}{\Gamma (\gamma)} \left[ \displaystyle \sum_{0< t_k < t}\left|\psi_k (x_n(t_k)) - \psi_k (x(t_k))\right| \right.\\ & \displaystyle\left. + \displaystyle \sum_{0< t_k < t} \lambda I_{t_{k-1}}^{(1-\alpha_1)(1-\beta) + \alpha_2 \beta} \left|x_n(t_k) - x(t_k)\right|\right. \\ & \displaystyle\left. + \sum_{0< t_k < t} I_{t_{k-1}}^{1+\beta(\alpha_1 + \alpha_2 -1)} \left|f(t_k, x_n(t_k)) - f(t_k, x(t_k))\right|\right]\\ &+ (t-t_k)^{1-\gamma}\lambda I_{t_k}^{\alpha_2} \left|x_n(t) - x(t)\right|\\ &+ (t-t_k)^{1-\gamma}I_{t_k}^{\alpha_1 + \alpha_2} \left|f(t, x_n(t)) - f(t, x(t))\right|. \end{align*} Since \(f\) is continuous, then by the Lebesgue Dominated Convergence Theorem which implies $$ \left\| (Nx_n)(t) - (Nx)(t) \right\|_{PC_{1-\gamma}} \rightarrow 0 \ \ \mbox{as} \ \ n \rightarrow \infty . $$

Step 2: The operator \(N\) is uniformly bounded.
By Theorem 3.2, \(N(B_r)\) is uniformly bounded. It is clear that \(N(B_r) \subset B_r\) is bounded.

Step 3: The operator \(N\) is equicontinuous. Let \(t_1, t_2 \in J, t_1 > t_2\). Then, \begin{align*} &\left|(Nx)(t_1) (t_1 - t_k)^{1-\gamma} - (Nx)(t_2) (t_2 - t_k)^{1-\gamma}\right|\\ & \leq \frac{1}{\Gamma (\gamma)} \left[ \displaystyle \sum_{0 < t_k < t_1 - t_2}\left| \psi_k (x(t_k))\right| + \displaystyle \sum_{0 < t_k < t_1 - t_2} \lambda I_{t_{k-1}}^{(1-\alpha_1)(1-\beta) + \alpha_2 \beta} \left| x(t_k)\right|\right. \\ & \quad \displaystyle\left. + \sum_{0 < t_k < t_1 - t_2} I_{t_{k-1}}^{1+\beta(\alpha_1 + \alpha_2 -1)} \left| f(t_k, x(t_k))\right|\right] - (t_1-t_k)^{1-\gamma}\lambda I_{t_k}^{\alpha_2} \left| x(t_1)\right| \\ &+ (t_2-t_k)^{1-\gamma}\lambda I_{t_k}^{\alpha_2} \left| x(t_2)\right|\\ & \quad + (t_1-t_k)^{1-\gamma}I_{t_k}^{\alpha_1 + \alpha_2} \left| f(t_1, x(t_1))\right| - (t_2-t_k)^{1-\gamma}I_{t_k}^{\alpha_1 + \alpha_2} \left| f(t_2, x(t_2))\right|. \end{align*} From Steps 1-3, combined with the Arzelà-Ascoli theorem, we conclude that \(N\) is continuous and compact. An application of Theorem 2.16 now shows that \(N\) has a fixed point \(x\), which is a solution of Equation (1).

Remark 3.4. Let \(z\) is solution of the inequality (2), then \(z\) is a solution of the following integral inequality \begin{align*} & \left|z(t) - \frac{(t-t_k)^{\gamma-1}}{\Gamma (\gamma)} \left[x_a + \displaystyle \sum_{0< t_k < t}\psi_k (z(t_k)) - \displaystyle \sum_{0< t_k < t} \lambda I_{t_{k-1}}^{(1-\alpha_1)(1-\beta) + \alpha_2 \beta} z(t_k) \right. \right.\\ &\left. \left.+ \displaystyle \sum_{0< t_k < t} I_{t_{k-1}}^{1+\beta(\alpha_1 + \alpha_2 -1)} f(t_k, z(t_k))\right] + \lambda I_{t_k}^{\alpha_2} z(t) - I_{t_k}^{\alpha_1 + \alpha_2} f(t, z(t))\right| \\ & \leq \epsilon \left[ \frac{m(b-a)^{\gamma-1}}{\Gamma(\gamma)} + \frac{m(b-a)^{\alpha_1 + \alpha_2}}{\Gamma(\gamma) \Gamma(2+\beta(\alpha_1 + \alpha_2 - 1))} + \frac{(b-a)^{\alpha_1 + \alpha_2}}{\Gamma(\alpha_1 + \alpha_2 + 1)}\right]. \end{align*}

Theorem 3.5. Assume that [H1], [H2] and [H3] hold. Then Equation (1) is generalized UHR stable.

Proof. Let \(z\) be solution of (4) and by Theorem 3.2 there \(x\) is unique solution of the problem \begin{align*} %\label{e1} \begin{split} D^{\alpha_1, \beta} (D^{\alpha_2, \beta} + \lambda) x(t) &= f(t, x(t)), \ \ \ \ \ \ \ \ \ t \in J = [0,T],\\ \Delta I^{1-\gamma}x(t)|_{t=t_k} &= \psi_k (x(t_k)), \ \ \ \ \ \ \ \ k = 1, 2, ..., m,\\ I^{1-\gamma} x(a) &= I^{1-\gamma} z(a) = x_a. \end{split} \end{align*} Then we have \begin{eqnarray*} x(t) = \left\{\begin{array}{lr} \frac{(t-t_k)^{\gamma-1}}{\Gamma (\gamma)} \left[x_a + \displaystyle \sum_{0< t_k< t}\psi_k (x(t_k)) - \displaystyle \sum_{0< t_k < t} \lambda I_{t_{k-1}}^{(1-\alpha_1)(1-\beta) + \alpha_2 \beta} x(t_k) \right. \\ \displaystyle\left. + \sum_{0< t_k < t} I_{t_{k-1}}^{1+\beta(\alpha_1 + \alpha_2 -1)} f(t_k, x(t_k))\right] - \lambda I_{t_k}^{\alpha_2} x(t) \\ + I_{t_k}^{\alpha_1 + \alpha_2} f(t, x(t)).\end{array} \right. \end{eqnarray*} By differentiating inequality (4), for each \(t \in (t_k, t_{k+1}] \), we have \begin{align*} & \left|z(t) - \frac{(t-t_k)^{\gamma-1}}{\Gamma (\gamma)} \left[x_a + \displaystyle \sum_{0< t_k < t}\psi_k (z(t_k)) - \displaystyle \sum_{0< t_k < t} \lambda I_{t_{k-1}}^{(1-\alpha_1)(1-\beta) + \alpha_2 \beta} z(t_k) \right. \right.\\ &\left. \left.+ \displaystyle \sum_{0< t_k < t} I_{t_{k-1}}^{1+\beta(\alpha_1 + \alpha_2 -1)} f(t_k, z(t_k))\right] + \lambda I_{t_k}^{\alpha_2} z(t) - I_{t_k}^{\alpha_1 + \alpha_2} f(t, z(t))\right| \\ & \leq \frac{(t-t_k)^{\gamma-1}}{\Gamma (\gamma)}\left[\sum_{0< t_k < t} g_k + \sum_{0< t_k < t} I_{t_{k-1}}^{1+\beta(\alpha_1 + \alpha_2 -1)} \varphi(t_k)\right] + I_{t_k}^{\alpha_1 + \alpha_2} \varphi(t)\\ & \leq \left[ \lambda_{\varphi} \left( \frac{m(b-a)^{\gamma-1}}{\Gamma(\gamma)} + 1\right) + \frac{m(b-a)^{\gamma-1}}{\Gamma(\gamma)}\right] \varphi(t). \end{align*} Hence for each \(t \in (t_k, t_{k+1}]\), it follows \begin{align*} & \left|z(t) - x(t)\right| \\ & \leq \left|z(t) - \frac{(t-t_k)^{\gamma-1}}{\Gamma (\gamma)}\left[x_a + \displaystyle \sum_{0< t_k < t}\psi_k (x(t_k)) - \displaystyle \sum_{0< t_k < t} \lambda I_{t_{k-1}}^{(1-\alpha_1)(1-\beta) + \alpha_2 \beta} x(t_k) \right. \right.\\ &\quad \displaystyle\left. \left.+ \sum_{0< t_k < t} I_{t_{k-1}}^{1+\beta(\alpha_1 + \alpha_2 -1)} f(t_k, x(t_k))\right] + \lambda I_{t_k}^{\alpha_2} x(t) - I_{t_k}^{\alpha_1 + \alpha_2} f(t, x(t)) \right|\\ &\leq \left|z(t) - \frac{(t-t_k)^{\gamma-1}}{\Gamma (\gamma)}\left[x_a + \displaystyle \sum_{0< t_k < t}\psi_k (z(t_k)) - \displaystyle \sum_{0< t_k < t} \lambda I_{t_{k-1}}^{(1-\alpha_1)(1-\beta) + \alpha_2 \beta} z(t_k) \right. \right.\\ &\quad\displaystyle\left. \left.+ \sum_{0< t_k< t} I_{t_{k-1}}^{1+\beta(\alpha_1 + \alpha_2 -1)} f(t_k, z(t_k))\right] + \lambda I_{t_k}^{\alpha_2} z(t) - I_{t_k}^{\alpha_1 + \alpha_2} f(t, z(t)) \right|\\ &\quad + \frac{(t-t_k)^{\gamma-1}}{\Gamma (\gamma)}\left( \displaystyle \sum_{0< t_k < t}\left|\psi_k (x(t_k)) - \psi_k (z(t_k))\right| \right.\\ &\quad\displaystyle\left. + \displaystyle \sum_{0< t_k < t} \lambda I_{t_{k-1}}^{(1-\alpha_1)(1-\beta) + \alpha_2 \beta} \left|x(t_k) - z(t_k)\right|\right. \\ & \quad\left. 
+ \sum_{0< t_k < t} I_{t_{k-1}}^{1+\beta(\alpha_1 + \alpha_2 -1)} \left|f(t_k, x(t_k)) - f(t_k, z(t_k))\right|\right) \\ &\quad + \lambda I_{t_k}^{\alpha_2} \left|x(t) - z(t)\right| + I_{t_k}^{\alpha_1 + \alpha_2} \left|f(t, x(t)) - f(t, z(t))\right|\\ &\leq \left[ \lambda_{\varphi} \left( \frac{m(b-a)^{\gamma-1}}{\Gamma(\gamma)} + 1\right) + \frac{m(b-a)^{\gamma-1}}{\Gamma(\gamma)}\right] \varphi(t)\\ &\quad + \frac{(t-t_k)^{\gamma-1}}{\Gamma (\gamma)}\left( \displaystyle \sum_{0< t_k < t}L_{\psi}\left|(x(t_k)) - (z(t_k))\right| \right. \\ & \quad\left.+ \displaystyle \sum_{0< t_k < t} \lambda I_{t_{k-1}}^{(1-\alpha_1)(1-\beta) + \alpha_2 \beta} \left|x(t_k) - z(t_k)\right| \right.\\ &\quad \displaystyle\left. + \sum_{0< t_k < t} I_{t_{k-1}}^{1+\beta(\alpha_1 + \alpha_2 -1)} L_f\left| x(t_k) - z(t_k)\right|\right)\\ & \quad+ \lambda I_{t_k}^{\alpha_2} \left|x(t) - z(t)\right| + I_{t_k}^{\alpha_1 + \alpha_2} L_f\left| x(t) - z(t)\right| \end{align*} By Lemma 2.15, there exists a constant \(\kappa > 0\) independent of \(\lambda_{\varphi} \varphi(t)\) such that $$ \left|z(t) - x(t)\right| \leq \kappa \varphi(t). $$ Thus, Equation (1) is generalized UHR stable.

Competing Interests

The authors declare that they have no competing interests.

References

  1. Hilfer, R. (1999). Applications of Fractional Calculus in Physics, World Scientific, Singapore.
  2. Kilbas, A. A. A., Srivastava, H. M., & Trujillo, J. J. (2006). Theory and applications of fractional differential equations (Vol. 204). Elsevier Science Limited.[Google Scholor]
  3. Podlubny, I. (1999). Fractional Differential Equations, Academic Press, San Diego.
  4. Furati, K. M., & Kassim, M. D. (2012). Existence and uniqueness for a problem involving Hilfer fractional derivative. Computers & Mathematics with Applications, 64(6), 1616-1626. [Google Scholor]
  5. Hilfer, R., Luchko, Y., & Tomovski, Z. (2009). Operational method for the solution of fractional differential equations with generalized Riemann-Liouville fractional derivatives. Fract. Calc. Appl. Anal., 12(3), 299-318. [Google Scholor]
  6. Vivek, D., Kanagarajan, K., & Sivasundaram, S. (2016). Dynamics and stability of pantograph equations via Hilfer fractional derivative. Nonlinear Studies, 23(4), 685-698. [Google Scholor]
  7. Vivek, D., Kanagarajan, K., & Elsayed, E. M. (2018). Some existence and stability results for Hilfer-fractional implicit differential equations with nonlocal conditions. Mediterranean Journal of Mathematics, 15(1), 15.[Google Scholor]
  8. Vivek, D., Kanagarajan, K., & Sivasundaram, S. (2017). Dynamics and stability results for Hilfer fractional type thermistor problem. Fractal and Fractional, 1(1), 1-14. [Google Scholor]
  9. Vivek, D., Kanagarajan, K., & Sivasundaram, S. (2017). Theory and analysis of nonlinear neutral pantograph equations via Hilfer fractional derivative. Nonlinear Studies, 24(3), 699-712.[Google Scholor]
  10. Wang, J., & Zhang, Y. (2015). Nonlocal initial value problems for differential equations with Hilfer fractional derivative. Applied Mathematics and Computation, 266, 850-859. [Google Scholor]
  11. Beck, C., & Roepstorff, G. (1987). From dynamical systems to the Langevin equation. Physica A: Statistical Mechanics and its Applications, 145(1-2), 1-14. [Google Scholor]
  12. Fa, K. S. (2007). Fractional Langevin equation and Riemann-Liouville fractional derivative. The European Physical Journal E, 24(2), 139-143.[Google Scholor]
  13. Chen, A., & Chen, Y. (2011). Existence of solutions to nonlinear Langevin equation involving two fractional orders with boundary value conditions. Boundary Value Problems, 2011(1), 516481.[Google Scholor]
  14. Ahmad, B., Nieto, J. J., Alsaedi, A., & El-Shahed, M. (2012). A study of nonlinear Langevin equation involving two fractional orders in different intervals. Nonlinear Analysis: Real World Applications, 13(2), 599-606.[Google Scholor]
  15. Harikrishnan, S., Kanagarajan, K., & Elsayed, E. M. (2018). Existence and stability results for Langevin equations with Hilfer fractional derivative. Res. Fixed Point Theory Appl., 10 pages.
  16. Baghani, O. (2017). On fractional Langevin equation involving two fractional orders. Communications in Nonlinear Science and Numerical Simulation, 42, 675-681. [Google Scholor]
  17. Yu, T., Deng, K., & Luo, M. (2014). Existence and uniqueness of solutions of initial value problems for nonlinear langevin equation involving two fractional orders. Communications in Nonlinear Science and Numerical Simulation, 19(6), 1661-1668. [Google Scholor]
  18. Lakshmikantham, V., & Simeonov, P. S. (1989). Theory of impulsive differential equations (Vol. 6). World scientific.
  19. Liu, X., & Li, Y. (2014). Some antiperiodic boundary value problem for nonlinear fractional impulsive differential equations. In Abstract and Applied Analysis (Vol. 2014). Hindawi.[Google Scholor]
  20. Luo, Z., & Shen, J. (2006). Global existence results for impulsive functional differential equations. J. Math. Anal. Appl., 323(1), 644-653.
  21. Ibrahim, R. W. (2012). Ulam-Hyers stability for Cauchy fractional differential equation in the unit disk. In Abstract and Applied Analysis (Vol. 2012). Hindawi. [Google Scholor]
  22. Ibrahim, R. W. (2012). Generalized Ulam–Hyers stability for fractional differential equations. International Journal of mathematics, 23(05), 1250056. [Google Scholor]
  23. Rus, I. A. (2010). Ulam stabilities of ordinary differential equations in a Banach space. Carpathian journal of Mathematics, 103-107. [Google Scholor]
  24. Wang, J., Lv, L., & Zhou, Y. (2011). Ulam stability and data dependence for fractional differential equations with Caputo derivative. Electronic Journal of Qualitative Theory of Differential Equations, 2011(63), 1-10. [Google Scholor]
  25. Wang, J., & Li, X. (2015). Ulam–Hyers stability of fractional Langevin equations. Applied Mathematics and Computation, 258, 72-83.[Google Scholor]
  26. Wang, J., Zhou, Y., & Fec, M. (2012). Nonlinear impulsive problems for fractional differential equations and Ulam stability. Computers & Mathematics with Applications, 64(10), 3389-3405. [Google Scholor]
  27. Ye, H., Gao, J., & Ding, Y. (2007). A generalized Gronwall inequality and its application to a fractional differential equation. Journal of Mathematical Analysis and Applications, 328(2), 1075-1081. [Google Scholor]
  28. Granas, A., & Dugundji, J. (2013). Fixed point theory. Springer Science & Business Media.[Google Scholor]
]]>
Coefficient estimates of some classes of rational functions https://old.pisrt.org/psr-press/journals/oma-vol-2-issue-2-2018/coefficient-estimates-of-some-classes-of-rational-functions/ Fri, 14 Dec 2018 00:57:48 +0000 https://old.pisrt.org/?p=1637
OMA-Vol. 2 (2018), Issue 2, pp. 114–128 | Open Access Full-Text PDF
Hanan Darwish, Suliman Sowileh, Abd AL-Monem Lashin
Abstract:Let \(\mathcal{A}\) be the class of analytic and univalent functions in the open unit disc \(\Delta\) normalized such that \(f(0)=0=f^{\prime }(0)-1.\) In this paper, for \(\psi \in \mathcal{A}\) of the form \(\frac{z}{f(z)}, f(z)=1+\sum\limits_{n=1}^{\infty }a_{_{n}}z^{n}\) and \(0\leq \alpha \leq 1,\) we introduce and investigate interesting subclasses \(\mathcal{H}_{\sigma }(\phi ), \;S_{\sigma }(\alpha ,\phi ), \; M_{\sigma }(\alpha ,\phi ),\) \( \Im _{\alpha} (\alpha ,\phi )\) and \(\beta _{\alpha}(\lambda ,\phi ) \left( \lambda \geq 0 \right)\) of analytic and bi-univalent Ma-Minda starlike and convex functions. Furthermore, we find estimates on the coefficients \(\left\vert a_{1}\right\vert\) and \(\left\vert a_{2}\right\vert\) for functions in these classes. Several related classes of functions are also considered.
]]>
Open Access Full-Text PDF

Open Journal of Mathematical Analysis

Coefficient Estimates of some Classes of Rational Functions

Hanan Darwish, Suliman Sowileh, Abd AL-Monem Lashin\(^1\)
Department of Mathematics, Faculty of Science, Mansoura University, Mansoura 35516, Egypt.; (H.D, S.S & A.A.L)
\(^{1}\)Corresponding Author;  s_soileh@yahoo.com

Copyright © 2018 Hanan Darwish, Suliman Sowileh, Abd AL-Monem Lashin. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.

Abstract

Let \(\mathcal{A}\) be the class of analytic and univalent functions in the open unit disc \(\Delta\) normalized such that \(f(0)=0=f^{\prime }(0)-1.\) In this paper, for \(\psi \in \mathcal{A}\) of the form \(\frac{z}{f(z)}, f(z)=1+\sum\limits_{n=1}^{\infty }a_{_{n}}z^{n}\) and \(0\leq \alpha \leq 1,\) we introduce and investigate interesting subclasses \(\mathcal{H}_{\sigma }(\phi ), \;S_{\sigma }(\alpha ,\phi ), \; M_{\sigma }(\alpha ,\phi ),\) \( \Im _{\alpha} (\alpha ,\phi )\) and \(\beta _{\alpha}(\lambda ,\phi ) \left( \lambda \geq 0 \right)\) of analytic and bi-univalent Ma-Minda starlike and convex functions. Furthermore, we find estimates on the coefficients \(\left\vert a_{1}\right\vert\) and \(\left\vert a_{2}\right\vert\) for functions in these classes. Several related classes of functions are also considered.

Keywords:

Rational functions; Bi-starlike functions; Bi-convex functions; Subordination.

1. Introduction

Let \(\mathcal{A\ }\) be the class of all analytic functions \(f\) in the open unit disk \(\Delta =\{z\in \mathbb{C} :\left\vert z\right\vert < 1\}\) and normalized by the conditions \(f(0)=0\) and \(f^{\prime }(0)=1.\) Also, by \(\wp\) we shall denote the subclass of all functions in \(\mathcal{A}\) which are univalent in \(\Delta.\) Let \(P\) denote the class of functions \(p(z)\) of the form \begin{equation*} p(z)=1+\sum\limits_{n=1}^{\infty }c_{_{n}}z^{n} \end{equation*} which are analytic in \(\Delta\) such that \begin{equation*} p(0)=1\text{and Re}\left\{ p(z)\right\} >0\ \ \ \left( z\in \Delta \right) . \end{equation*} If the functions \(f\) and \(g\) are analytic in \(\Delta ,\) then \(f\) is said to be subordinate to \(g,\) written \(f(z)\prec g(z),\) provided there is an analytic function \(w(z)\) defined on \(\Delta\) with \(w(0)=0\) and \(\left\vert w(z)\right\vert < 1\) so that \(f(z)=g(w(z)).\) Furthermore , if the function \(g(z)\) is univalent in \(\mathbb{\triangle },\) then we have the following equivalence (see for details, [1, 2, 3, 4, 5,6, 7, 8, 9, 10,11, 12]): \begin{equation*} f(z)\prec g(z)\Leftrightarrow f(0)=g(0)\ \textrm{and}\ f(\mathbb{\triangle })\subset g(\mathbb{\triangle }). \end{equation*} Some of the important and well-investigated subclasses of the univalent function class \(\wp\) include (for example) the class \(S(\alpha )\) of starlike functions of order \(\alpha\) in \(\Delta\) and the class \( C(\alpha )\) of convex functions of order \(\alpha\) in \(\Delta\). By definition, we have
\begin{equation} S(\alpha )=\left\{ f:f\in \wp \ \ \textrm{and}\ \ \textrm{Re}\frac{zf^{\prime }(z)}{ f(z)}>\alpha \ \ \ \ (z\in \Delta ,\ 0\leq \alpha < 1)\right\} \label{1.a} \end{equation}
(1)
and
\begin{equation} C(\alpha )=\left\{ f:f\in \wp \ \ \textrm{and}\ \ \textrm{Re}\left( 1+\frac{zf^{\prime \prime }(z)}{f^{\prime }(z)}\right) >\alpha \ \ \ \ (z\in \Delta ,\ 0\leq \alpha < 1)\right\} . \label{1.b} \end{equation}
(2)
It readily follows from the definitions (1) and (2) that
\begin{equation} f(z)\in C(\alpha )\iff zf^{\prime }(z)\in S(\alpha ). \label{1.c} \end{equation}
(3)
It is well known that for each \(f\in \wp ,\) the Koebe one-quarter theorem [13] ensures that the image of \(\Delta\) under \(f\) contains a disk of radius \(1/4.\) Thus every univalent function \(f\in \wp \) has an inverse \(f^{-1}\) which satisfies \begin{equation*} f^{-1}(f(z))=z\ (\left\vert z\right\vert < 1) \end{equation*} and \begin{equation*} f(f^{-1}(w))=w,\ \ \ (\left\vert w\right\vert < r_{0}(f),\ r_{0}(f)\geq 1/4). \end{equation*} In fact, the inverse function \(g=f^{-1}\) is defined by \begin{equation*} g(w)=f^{-1}(w)=w-a_{2}w^{2}+(2a_{2}^{2}-a_{3})w^{3}-(5a_{2}^{3}-5a_{2}a_{3}+a_{4})w^{4}+.... \end{equation*} A function \(f\in \mathcal{A}\) is said to be bi-univalent in \(\Delta\) if both \(f\) and \(f^{-1}\) are univalent in \(\Delta.\) Let \(\sigma\) denote the class of bi-univalent functions defined in the unit disk \(\Delta\) and let \( \phi \in P\) be such that \(\phi (\Delta )\) is symmetric with respect to the real axis; such a function has a Taylor series of the form:
\begin{equation} \phi (z)=1+B_{1}z+B_{2}z^{2}+B_{3}z^{3}+...\left( B_{1}>0\right) . \label{2.1} \end{equation}
(4)
In [14], the authors introduced the class \(S(\phi)\) of the so-called Ma and Minda starlike functions and the class \(C(\phi )\) of Ma and Minda convex functions, unifying several previously studied classes related to those of starlike and convex functions. The class \(S(\phi)\) consists of all the functions \(f\in \mathcal{A}\) satisfying subordination \(\dfrac{zf^{\prime }(z)}{f(z)}\prec \phi (z),\) whereas \(C(\phi )\) is formed with functions \(f\in \mathcal{A}\) for which the subordination \(1+\) \(\dfrac{ zf^{\prime \prime }(z)}{f^{\prime }(z)}\prec \phi (z)\) holds. Lewin [15] investigated the class \(\sigma\) and showed that \(\left\vert a_{2}\right\vert < 1.51\) for function \(f(z)=z+\sum\limits_{n=2}^{\infty }a_{_{n}}z^{n}\in \sigma\). Subsequently, Brannan and Clunie [16] conjectured that \(\left\vert a_{2}\right\vert < \sqrt{2}.\) Netanyahu [17], on the other hand, showed that max \(\left\vert a_{2}\right\vert =4/3\) if \(f(z)\in \sigma .\) Brannan and Taha [18] and Taha[19] introduced certain subclasses of bi-univalent functions, similar to the familiar subclasses of univalent functions consisting of strongly starlike and convex functions, they introduced bi-starlike functions and bi-convex functions and found non-sharp estimates on the first two Taylor-Maclaurin coefficients \(\left\vert a_{2}\right\vert\) and \(\left\vert a_{3}\right\vert .\) Recently, many authors investigated bounds for various subclasses of bi-univalent functions (see [20, 21, 22, 23, 24, 25, 26, 27, 28, 29, 30, 31, 32, 33]). In [34], Mitrinovic essentially investigated certain geometric properties of functions \(\psi\) of the form
\begin{equation} \psi (z)=\frac{z}{f(z)},\ \ \ f(z)=1+\sum\limits_{n=1}^{\infty }a_{_{n}}z^{n}. \label{1.1} \end{equation}
(5)

In [35], Reade et al. derived coefficient conditions that guarantee the univalence, starlikeness or convexity of rational functions of the form (5); these results have been improved and generalized in [36]. In this paper, estimates on the initial coefficients of bi-starlike and bi-convex functions of Ma-Minda type of the rational form (5) are obtained. Several related classes are also considered.

In order to derive our main results, we require the following lemma.

Lemma 1.1. (see [37]) If \(p(z)\in P\), then

\begin{equation} \left\vert c_{n}\right\vert \leq 2\ \ \ \ \left( n\in \mathbb{N} =\left\{ 1,2,...\right\} \right) . \label{1.2} \end{equation}
(6)
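For instance, the function \(p(z)=\frac{1+z}{1-z}=1+2z+2z^{2}+2z^{3}+\cdots\) belongs to \(P\) and shows that the bound (6) is sharp for every \(n\).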

2. Coefficient estimates

A function \(\psi (z)\in \mathcal{A}\) with Re \(\left( \psi ^{\prime }(z)\right) >0\) is known to be univalent. This motivates the following class of functions.

Definition 2.1. A function \(\psi \in \sigma\) given by (5) is said to be in the class \(\mathcal{H}_{\sigma }(\phi )\) if the following conditions are satisfied: \begin{equation*} \psi ^{\prime }(z)\prec \phi (z)\left( z\in \Delta \right) \ \text{and} \ g^{\prime }(w)\prec \phi (w)\left( w\in \Delta \right) ,\ \ \ \end{equation*} where \(g(w):=\psi ^{-1}(w).\)

If we set \begin{equation*} \phi (z)=\left( \frac{1+z}{1-z}\right) ^{\gamma }=1+2\gamma z+2\gamma ^{2}z^{2}+...\left( 0< \gamma \leq 1,\ z\in \Delta \right) \end{equation*} in Definition 2.1 of the bi-univalent function class \( \mathcal{H}_{\sigma }(\phi )\) we obtain a new class \(\mathcal{H}_{\sigma }(\gamma )\) given by Definition 2.2 below.

Definition 2.2. For \(0< \gamma \leq 1,\) a function \(\psi \in \sigma\) given by (5) is said to be in the class \(\mathcal{H}_{\sigma }(\gamma )\) if the following conditions are satisfied: \begin{equation*} \psi ^{\prime }(z)\prec \left( \frac{1+z}{1-z}\right) ^{\gamma }\left( z\in \Delta \right) \ \ \text{and}\ \ g^{\prime }(w)\prec \left( \frac{1+w}{1-w}% \right) ^{\gamma }\left( w\in \Delta \right) , \end{equation*} where \(g(w):=\psi ^{-1}(w).\)

If we set \begin{equation*} \phi (z)=\frac{1+(1-2\nu )z}{1-z}=1+2(1-\nu )z+2(1-\nu )z^{2}+...\left( 0< \nu \leq 1,\ z\in \Delta \right) \end{equation*} in Definition 2.1 of the bi-univalent function class \( \mathcal{H}_{\sigma }(\phi )\) we obtain a new class \(\mathcal{H}_{\sigma }(\nu )\) given by Definition 2.3 below.

Definition 2.3. For \(0< \nu \leq 1,\) a function \(\psi \in \sigma\) given by (5) is said to be in the class \(\mathcal{H}_{\sigma }(\nu )\) if the following conditions hold true: \begin{equation*} \psi ^{\prime }(z)\prec \frac{1+(1-2\nu )z}{1-z}\left( z\in \Delta \right) \ \text{and} \ g^{\prime }(w)\prec \frac{1+(1-2\nu )w}{1-w}\left( w\in \Delta \right) , \end{equation*} where \(g(w):=\psi ^{-1}(w).\)

Theorem 2.4. Let \(\psi (z)\in \mathcal{H}_{\sigma }(\phi )\) be of the form (5). Then

\begin{equation} \left\vert a_{1}\right\vert \leq \frac{B_{1}\sqrt{B_{1}}}{\sqrt{\left\vert 3B_{1}^{2}-4B_{2}+4B_{1}\right\vert }}\ \ \ \text{and }\left\vert a_{2}\right\vert \leq \frac{1}{3}B_{1}. \label{2.2} \end{equation}
(7)

Proof. Let \(\psi (z)\in \mathcal{H}_{\sigma }(\phi )\) and \(g=\psi ^{-1}.\) Then there exist two functions \(u\) and \(v,\) analytic in \(\Delta,\) with \(u(0)=v(0)=0,\ \ \left\vert u(z)\right\vert < 1\) and \( \left\vert v(w)\right\vert < 1,\ z,w\in \Delta ,\) such that

\begin{equation} \psi ^{\prime }(z)=\phi (u(z)) \ \text{and} \ g^{\prime }(w)=\phi (v(w)). \label{2.3} \end{equation}
(8)
Next, define the functions \(p_{1}\) and \(p_{2}\) by \begin{equation*} p_{1}(z)=\frac{1+u(z)}{1-u(z)}=1+c_{1}z+c_{2}z^{2}+...\ \text{and}\ p_{2}(w)=\frac{1+v(w)}{1-v(w)}=1+b_{1}w+b_{2}w^{2}+..., \end{equation*} or, equivalently,
\begin{equation} u(z)=\frac{p_{1}(z)-1}{p_{1}(z)+1}=\frac{1}{2}\left[ c_{1}z+\left( c_{2}- \frac{c_{1}^{2}}{2}\right) z^{2}+...\right] , \label{2.4} \end{equation}
(9)
and
\begin{equation} v(w)=\frac{p_{2}(w)-1}{p_{2}(w)+1}=\frac{1}{2}\left[ b_{1}w+\left( b_{2}- \frac{b_{1}^{2}}{2}\right) w^{2}+...\right] . \label{2.5} \end{equation}
(10)
Then \(p_{1}\) and \(p_{2}\) are analytic in \(\Delta\) with \( p_{1}(0)=1=p_{2}(0).\) Since \(u,v:\Delta \longrightarrow \Delta ,\) the functions \(p_{1}\) and \(p_{2}\) have a positive real part in \(\Delta,\) and \( \left\vert b_{i}\right\vert \leq 2\) and \(\left\vert c_{i}\right\vert \leq 2.\) Clearly, upon substituting from (9) and (10) into (8) and making use of (4), we find that
\begin{equation} \psi ^{\prime }(z)=\phi (\frac{p_{1}(z)-1}{p_{1}(z)+1})=1+\frac{1}{2}B_{1}c_{1}z+\left[ \frac{1}{2}B_{1}\left( c_{2}-\frac{c_{1}^{2}}{2}\right) +\frac{1}{4}B_{2}c_{1}^{2}\right] z^{2}+..., \label{2.6} \end{equation}
(11)
and
\begin{equation} g^{\prime }(w)=\phi (\frac{p_{2}(w)-1}{p_{2}(w)+1})=1+\frac{1}{2}B_{1}b_{1}w+\left[ \frac{1}{2}B_{1}\left( b_{2}-\frac{b_{1}^{2}}{2}\right) +\frac{1}{4}B_{2}b_{1}^{2}\right] w^{2}+...\ . \label{2.61} \end{equation}
(12)
Since \(\psi \in \sigma \) has the Maclaurin's series given by
\begin{equation} \psi (z)=z-a_{1}z^{2}+(a_{1}^{2}-a_{2})z^{3}+..., \label{2.9} \end{equation}
(13)
a computation shows that its inverse \(g=\psi ^{-1}\) has the expansion
\begin{equation} g(w)=\psi ^{-1}(w)=w+a_{1}w^{2}+(a_{1}^{2}+a_{2})w^{3}+...\ . \label{2.91} \end{equation}
(14)
Using (13) and (14) in (11) and (12) respectively, we get
\begin{equation} -2a_{1}=\frac{1}{2}B_{1}c_{1} \label{2.10} \end{equation}
(15)
\begin{equation} 3(a_{1}^{2}-a_{2})=\frac{1}{2}B_{1}(c_{2}-\frac{c_{1}^{2}}{2})+\frac{1}{4}B_{2}c_{1}^{2}, \label{2.11} \end{equation}
(16)
\begin{equation} 2a_{1}=\frac{1}{2}B_{1}b_{1} \label{2.12} \end{equation}
(17)
and
\begin{equation} 3(a_{1}^{2}+a_{2})=\frac{1}{2}B_{1}(b_{2}-\frac{b_{1}^{2}}{2})+\frac{1}{4}B_{2}b_{1}^{2}. \label{2.13} \end{equation}
(18)
From (15) and (17), we have
\begin{equation} c_{1}=-b_{1}. \label{2.14} \end{equation}
(19)
Adding (16) and (18) and then using (15) and (19), we get \begin{equation*} a_{1}^{2}=\frac{B_{1}^{3}(c_{2}+b_{2})}{4(3B_{1}^{2}-4B_{2}+4B_{1})}, \end{equation*} and now, by applying Lemma 1.1 for the coefficients \(b_{2}\) and \(c_{2},\) the last equation gives the bound of \(\left\vert a_{1}\right\vert\) from (7). By subtracting (16) from (18), further computations using (19) lead to \begin{equation*} a_{2}=\frac{1}{12}B_{1}(b_{2}-c_{2}). \end{equation*} The bound of \(\left\vert a_{2}\right\vert ,\) as asserted in (7), is now a consequence of Lemma 1.1, and this completes our proof.
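The expansions (13) and (14) used above can also be checked by symbolic series reversion; a minimal sympy sketch (not part of the paper):

import sympy as sp

z, w = sp.symbols('z w')
a1, a2, b2, b3 = sp.symbols('a1 a2 b2 b3')

# psi(z) = z / f(z) with f(z) = 1 + a1*z + a2*z**2 + ..., as in (5)
psi = sp.series(z / (1 + a1*z + a2*z**2), z, 0, 4).removeO()
print(sp.expand(psi))                          # z - a1*z**2 + (a1**2 - a2)*z**3, i.e. (13)

# ansatz g(w) = w + b2*w**2 + b3*w**3 for the inverse, and match psi(g(w)) = w
g = w + b2*w**2 + b3*w**3
comp = sp.expand(psi.subs(z, g))
eqs = [comp.coeff(w, k) for k in (2, 3)]
print(sp.solve(eqs, (b2, b3), dict=True)[0])   # {b2: a1, b3: a1**2 + a2}, i.e. (14)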

Using the parameter setting of Definition 2.2 in Theorem 2.4, we get the following corollary.

Corollary 2.5. For \(0< \gamma \leq 1,\) let the function \(\psi \in \mathcal{H} _{\sigma }(\gamma )\) be of the form (5). Then \begin{equation*} \left\vert a_{1}\right\vert \leq \frac{\sqrt{2}\gamma }{\sqrt{\gamma +2}}\ \ \ \text{and}\ \left\vert a_{2}\right\vert \leq \frac{2}{3}\gamma . \end{equation*}
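Indeed, Corollary 2.5 can be read off from Theorem 2.4: for \(\phi (z)=\left( \frac{1+z}{1-z}\right) ^{\gamma }\) we have \(B_{1}=2\gamma\) and \(B_{2}=2\gamma ^{2}\), so that \(3B_{1}^{2}-4B_{2}+4B_{1}=4\gamma ^{2}+8\gamma\) and (7) becomes \begin{equation*} \left\vert a_{1}\right\vert \leq \frac{2\gamma \sqrt{2\gamma }}{2\sqrt{\gamma ^{2}+2\gamma }}=\frac{\sqrt{2}\gamma }{\sqrt{\gamma +2}}\ \ \ \text{and}\ \ \ \left\vert a_{2}\right\vert \leq \frac{2}{3}\gamma . \end{equation*}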

Using the parameter setting of Definition 2.3 in Theorem 2.4, we get the following corollary.

Corollary 2.6. For \(0< \nu \leq 1,\) let the function \(\psi \in \mathcal{H} _{\sigma }(\nu )\) be given by (5). Then \begin{equation*} \left\vert a_{1}\right\vert \leq \sqrt{\frac{2}{3}\left( 1-\nu \right) }\ \ \ \text{and }\left\vert a_{2}\right\vert \leq \frac{2}{3}\left( 1-\nu \right) . \end{equation*}

Definition 2.7. A function \(\psi \in \sigma\) given by (5) is said to be in the class \(S_{\sigma }(\alpha ,\phi )\) if the following subordinations hold: \begin{equation*} \frac{z\psi ^{\prime }(z)}{\psi (z)}+\frac{\alpha z^{2}\psi ^{\prime \prime }(z)}{\psi (z)}\prec \phi (z)\left( z\in \Delta \right) \ \ \ \text{and }\frac{wg^{\prime }(w)}{g(w)}+\frac{\alpha w^{2}g^{\prime \prime }(w)}{g(w)}\prec \phi (w)\left( w\in \Delta \right) , \end{equation*} where \(g(w):=\psi ^{-1}(w).\)

If we set \begin{equation*} \phi (z)=\left( \frac{1+z}{1-z}\right) ^{\gamma }=1+2\gamma z+2 \gamma ^{2}z^{2}+...\left(0< \gamma \leq 1,\ z\in \Delta \right) \end{equation*} in Definition 2.7 of the bi-univalent function class \( S_{\sigma }(\alpha ,\phi ),\) we obtain a new class \(S_{\sigma }(\alpha ,\gamma )\) given by Definition 2.8 below.

Definition 2.8. For \(0\leq \alpha \leq 1\) and \(0< \gamma \leq 1,\) a function \( \psi \in \sigma \) given by (5) is said to be in the class \( S_{\sigma }(\alpha ,\gamma )\) if the following subordinations hold: \begin{equation*} \frac{z\psi ^{\prime }(z)}{\psi (z)}+\frac{\alpha z^{2}\psi ^{\prime \prime }(z)}{\psi (z)}\prec \left( \frac{1+z}{1-z}\right) ^{\gamma }\left( z\in \Delta \right) , \end{equation*} and \begin{equation*} \frac{wg^{\prime }(w)}{g(w)}+\frac{\alpha w^{2}g^{\prime \prime }(w)}{g(w)}\prec \left( \frac{1+w}{1-w}\right) ^{\gamma }\left( w\in \Delta \right) , \end{equation*} where \(g(w):=\psi ^{-1}(w).\)

If we set \begin{equation*} \phi (z)=\frac{1+(1-2\nu )z}{1-z}=1+2(1-\nu )z+2(1-\nu )z^{2}+...\left( 0< \nu \leq 1,\ z\in \Delta \right) \end{equation*} in Definition 2.7 of the bi-univalent function class \( S_{\sigma }(\alpha ,\phi )\) we obtain a new class \(S_{\sigma }(\alpha ,\nu )\) given by Definition 2.9 below.

Definition 2.9. For \(0\leq \alpha \leq 1\) and \(0< \nu \leq 1,\) a function \( \psi \in \sigma \) given by (5) is said to be in the class \( S_{\sigma }(\alpha ,\nu )\) if the following subordinations hold: \begin{equation*} \frac{z\psi ^{\prime }(z)}{\psi (z)}+\frac{\alpha z^{2}\psi ^{\prime \prime }(z)}{\psi (z)}\prec \frac{1+(1-2\nu )z}{1-z}\left( z\in \Delta \right) \end{equation*} and \begin{equation*} \frac{wg^{\prime }(w)}{g(w)}+\frac{\alpha w^{2}g^{\prime \prime }(w)}{g(w)}\prec \frac{1+(1-2\nu )w}{1-w}\left( w\in \Delta \right) , \end{equation*} where \(g(w)=\psi ^{-1}(w).\)

Note that \(S(\phi )=S_{\sigma }(0,\phi ).\) For functions in the class \(S_{\sigma }(\alpha ,\phi ),\) the following coefficient estimates are obtained.

Theorem 2.10. Let \(\psi (z)\in S_{\sigma }(\alpha ,\phi )\) be of the form (5). Then

\begin{equation} \left\vert a_{1}\right\vert \leq \frac{B_{1}\sqrt{B_{1}}}{\sqrt{\left\vert B_{1}^{2}(1+4\alpha )+(B_{1}-B_{2})(1+2\alpha )^{2}\right\vert }},\ \ \ \label{2.15} \end{equation}
(20)
and
\begin{equation} \left\vert a_{2}\right\vert \leq \frac{B_{1}}{1+3\alpha }. \label{2.16} \end{equation}
(21)

Proof. Let \(\psi \in S_{\sigma }(\alpha ,\phi ).\) Then there are two Schwarz functions \(u\) and \(v\) defined by (9) and (10) respectively, such that

\begin{equation} \frac{z\psi ^{\prime }(z)}{\psi (z)}+\frac{\alpha z^{2}\psi ^{\prime \prime }(z)}{\psi (z)}=\phi (u(z))\ \ \ \text{and }\frac{wg^{\prime }(w)}{g(w)}+\frac{\alpha w^{2}g^{\prime \prime }(w)}{g(w)}=\phi (v(w)),\ \ \ \left( g=\psi ^{-1}\right) . \label{2.17} \end{equation}
(22)
Since \begin{equation*} \frac{z\psi ^{\prime }(z)}{\psi (z)}+\frac{\alpha z^{2}\psi ^{\prime \prime }(z)}{\psi (z)}=1-\left( 1+2\alpha \right) a_{1}z+\left[ \left( 1+4\alpha \right) a_{1}^{2}-2\left( 1+3\alpha \right) a_{2}\right] z^{2}+... \end{equation*} and \begin{equation*} \frac{wg^{\prime }(w)}{g(w)}+\frac{\alpha w^{2}g^{\prime \prime }(w)}{g(w)}=1+\left( 1+2\alpha \right) a_{1}w+\left[ \left( 1+4\alpha \right) a_{1}^{2}+2\left( 1+3\alpha \right) a_{2}\right] w^{2}+..., \end{equation*} then (11), (12) and (22) yield
\begin{equation} -(1+2\alpha )a_{1}=\frac{1}{2}B_{1}c_{1} \label{2.18} \end{equation}
(23)
\begin{equation} (1+4\alpha )a_{1}^{2}-2(1+3\alpha )a_{2}=\frac{1}{2}B_{1}(c_{2}-\frac{c_{1}^{2}}{2})+\frac{1}{4}B_{2}c_{1}^{2}, \label{2.19} \end{equation}
(24)
\begin{equation} (1+2\alpha )a_{1}=\frac{1}{2}B_{1}b_{1} \label{2.20} \end{equation}
(25)
and
\begin{equation} (1+4\alpha )a_{1}^{2}+2(1+3\alpha )a_{2}=\frac{1}{2}B_{1}(b_{2}-\frac{b_{1}^{2}}{2})+\frac{1}{4}B_{2}b_{1}^{2}. \label{2.21} \end{equation}
(26)
From (23) and (25), we get
\begin{equation} c_{1}=-b_{1}, \label{2.22} \end{equation}
(27)
and after some further calculations using (24)-(27) we find \begin{equation*} a_{1}^{2}=\frac{B_{1}^{3}(c_{2}+b_{2})}{4\left[ B_{1}^{2}(1+4\alpha )+(B_{1}-B_{2})(1+2\alpha )^{2}\right] }, \end{equation*} and \begin{equation*} a_{2}=\frac{B_{1}(b_{2}-c_{2})}{4(1+3\alpha )}. \end{equation*} Applying Lemma 1.1, the estimates in (20) and (21) follow.

For \(\alpha =0,\) Theorem 2.10 readily yields the following coefficient estimates for Ma-Minda bi-starlike functions.

Corollary 2.11. Let \(\psi\) given by (5) be in the class \(S(\phi ).\) Then \begin{equation*} \left\vert a_{1}\right\vert \leq \frac{B_{1}\sqrt{B_{1}}}{\sqrt{\left\vert B_{1}^{2}+B_{1}-B_{2}\right\vert }},\ \ \ \text{and}\ \ \ \left\vert a_{2}\right\vert \leq B_{1}. \end{equation*}

Using the parameter setting of Definition 2.8 in Theorem 2.10, we get the following corollary.

Corollary 2.12. For \(0\leq \alpha \leq 1\) and \(0< \gamma \leq 1,\) let the function \(\psi \in S_{\sigma }(\alpha ,\gamma )\) be of the form (5). Then \begin{equation*} \left\vert a_{1}\right\vert \leq \frac{2\gamma }{\sqrt{\left( 1+2\alpha \right) ^{2}+\gamma \left[ 1+4\alpha -4\alpha ^{2}\right] }}\ \ \ \text{and}% \ ~\ \left\vert a_{2}\right\vert \leq \frac{2\gamma }{1+3\alpha }. \end{equation*}

Using the parameter setting of Definition 2.9 in Theorem 2.10 we get the following corollary.

Corollary 2.13. For \(0\leq \alpha \leq 1\) and \(0< \nu \leq 1,\) let the function \(\psi \in S_{\sigma }(\alpha ,\nu )\) be of the form (5). Then \begin{equation*} \left\vert a_{1}\right\vert \leq \sqrt{\frac{2\left( 1-\nu \right) }{ 1+4\alpha }}\ \ \ \text{and }\left\vert a_{2}\right\vert \leq \frac{2\left( 1-\nu \right) }{1+3\alpha }. \end{equation*}
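Similarly, Corollary 2.13 follows at once from Theorem 2.10: here \(B_{1}=B_{2}=2(1-\nu )\), so the term \((B_{1}-B_{2})(1+2\alpha )^{2}\) vanishes and (20) reduces to \begin{equation*} \left\vert a_{1}\right\vert \leq \frac{B_{1}\sqrt{B_{1}}}{\sqrt{B_{1}^{2}(1+4\alpha )}}=\sqrt{\frac{2\left( 1-\nu \right) }{1+4\alpha }}, \end{equation*} while (21) gives \(\left\vert a_{2}\right\vert \leq \frac{2\left( 1-\nu \right) }{1+3\alpha }.\)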

Definition 2.14. A function \(\psi \in \sigma\) given by (5) belongs to the class \(M_{\sigma }(\alpha ,\phi )\) \(\left( 0\leq \alpha \leq 1\right),\) if the following subordinations hold: \begin{equation*} (1-\alpha )\frac{z\psi ^{\prime }(z)}{\psi (z)}+\alpha (1+\frac{z\psi ^{\prime \prime }(z)}{\psi ^{\prime }(z)})\prec \phi (z)\left( z\in \Delta \right) , \end{equation*} and \begin{equation*} (1-\alpha )\frac{wg^{\prime }(w)}{g(w)}+\alpha (1+\frac{wg^{\prime \prime }(w)}{g^{\prime }(w)})\prec \phi (w),\left( w\in \Delta \right) , \end{equation*} where \(g(w):=\psi ^{-1}(w).\)

If we set \begin{equation*} \phi (z)=\left( \frac{1+z}{1-z}\right) ^{\gamma }=1+2\gamma z+2\gamma ^{2}z^{2}+...\left( 0< \gamma \leq 1,\ z\in \Delta \right) \end{equation*} in Definition 2.14 of the bi-univalent function class \( M_{\sigma }(\alpha ,\phi ),\) we obtain a new class \(M_{\sigma }(\alpha ,\gamma )\) given by Definition 2.15 below.

Definition 2.15. For \( 0\leq \alpha \leq 1\) and \(0< \gamma \leq 1,\) a function \( \psi \in \sigma \) given by (5) is said to be in the class \( M_{\sigma }(\alpha ,\gamma )\) if the following subordinations hold: \begin{equation*} (1-\alpha )\frac{z\psi ^{\prime }(z)}{\psi (z)}+\alpha (1+\frac{z\psi ^{\prime \prime }(z)}{\psi ^{\prime }(z)})\prec \left( \frac{1+z}{1-z} \right) ^{\gamma }\left( z\in \Delta \right) , \end{equation*} and \begin{equation*} (1-\alpha )\frac{wg^{\prime }(w)}{g(w)}+\alpha (1+\frac{wg^{\prime \prime }(w)}{g^{\prime }(w)})\prec \left( \frac{1+w}{1-w}\right) ^{\gamma }\left( w\in \Delta \right) , \end{equation*} \(g(w):=\psi ^{-1}(w).\)

Corollary 2.16. If we set \begin{equation*} \phi (z)=\frac{1+(1-2\nu )z}{1-z}=1+2(1-\nu )z+2(1-\nu )z^{2}+...\left( 0< \nu \leq 1,\ z\in \Delta \right) \end{equation*} in Definition 2.14 of the bi-univalent function class \( M_{\sigma }(\alpha ,\phi )\) we obtain a new class \(M_{\sigma }(\alpha ,\nu)\) given by Definition 2.17 below.

Definition 2.17. For \(0\leq \alpha \leq 1\) and \(0< \nu \leq 1,\) a function \( \psi \in \sigma \) given by (5) is said to be in the class \( M_{\sigma }(\alpha ,\nu )\) if the following subordinations hold: \begin{equation*} (1-\alpha )\frac{z\psi ^{\prime }(z)}{\psi (z)}+\alpha (1+\frac{z\psi ^{\prime \prime }(z)}{\psi ^{\prime }(z)})\prec \frac{1+(1-2\nu )z}{1-z}% \left( z\in \Delta \right) , \end{equation*}

and \begin{equation*} (1-\alpha )\frac{wg^{\prime }(w)}{g(w)}+\alpha (1+\frac{wg^{\prime \prime }(w)}{g^{\prime }(w)})\prec \frac{1+(1-2\nu )w}{1-w}\left( w\in \Delta \right) , \end{equation*} where \(g(w):=\psi ^{-1}(w).\) A function in the class \(M_{\sigma }(\alpha ,\phi )\) is called a bi-Mocanu-convex function of Ma-Minda type. This class unifies the classes \( S(\phi )\) and \(C(\phi ).\) For functions in the class \(M_{\sigma }(\alpha ,\phi ),\) the following coefficient estimates hold.

Theorem 2.18. Let \(\psi (z)\in M_{\sigma }(\alpha ,\phi )\) be of the form (5). Then

\begin{equation} \left\vert a_{1}\right\vert \leq \frac{B_{1}\sqrt{B_{1}}}{\sqrt{(1+\alpha )\left\vert B_{1}^{2}+(1+\alpha )(B_{1}-B_{2})\right\vert }}, \label{2.23} \end{equation}
(28)
and
\begin{equation} \left\vert a_{2}\right\vert \leq \frac{B_{1}}{2(1+2\alpha )}. \label{2.24} \end{equation}
(29)

Proof. If \(\psi \in M_{\sigma }(\alpha ,\phi ),\) then there exist two Schwarz functions \(u\) and \(v\), defined by (9) and (10) respectively, such that

\begin{equation} (1-\alpha )\frac{z\psi ^{\prime }(z)}{\psi (z)}+\alpha (1+\frac{z\psi ^{\prime \prime }(z)}{\psi ^{\prime }(z)})=\phi (u(z)), \label{2.25} \end{equation}
(30)
and
\begin{equation} (1-\alpha )\frac{wg^{\prime }(w)}{g(w)}+\alpha (1+\frac{wg^{\prime \prime }(w)}{g^{\prime }(w)})=\phi (v(w)). \label{2.26} \end{equation}
(31)
Since \begin{equation*} (1-\alpha )\frac{z\psi ^{\prime }(z)}{\psi (z)}+\alpha (1+\frac{z\psi ^{\prime \prime }(z)}{\psi ^{\prime }(z)})=1-\left( 1+\alpha \right) a_{1}z+% \left[ \left( 1+\alpha \right) a_{1}^{2}-2\left( 1+2\alpha \right) a_{2}% \right] z^{2}+... \end{equation*} and \begin{equation*} (1-\alpha )\frac{wg^{\prime }(w)}{g(w)}+\alpha (1+\frac{wg^{\prime \prime }(w)}{g^{\prime }(w)})=1+\left( 1+\alpha \right) a_{1}w+\left[ \left( 1+\alpha \right) a_{1}^{2}+2\left( 1+2\alpha \right) a_{2}\right] w^{2}+..., \end{equation*} from (11), (12), (30) and (31), it follows that
\begin{equation} -(1+\alpha )a_{1}=\frac{1}{2}B_{1}c_{1}, \label{2.27} \end{equation}
(32)
\begin{equation} (1+\alpha )a_{1}^{2}-2(1+2\alpha )a_{2}=\frac{1}{2}B_{1}(c_{2}-\frac{% c_{1}^{2}}{2})+\frac{1}{4}B_{2}c_{1}^{2}, \label{2.28} \end{equation}
(33)
\begin{equation} (1+\alpha )a_{1}=\frac{1}{2}B_{1}b_{1}, \label{2.29} \end{equation}
(34)
and
\begin{equation} (1+\alpha )a_{1}^{2}+2(1+2\alpha )a_{2}=\frac{1}{2}B_{1}(b_{2}-\frac{% b_{1}^{2}}{2})+\frac{1}{4}B_{2}b_{1}^{2}, \label{2.30} \end{equation}
(35)
Equations (32) and (34) yield
\begin{equation} c_{1}=-b_{1}, \label{2.31} \end{equation}
(36)
and after some further calculations using (33)-(36) we find \begin{equation*} a_{1}^{2}=\frac{B_{1}^{3}(c_{2}+b_{2})}{4(1+\alpha )\left[ B_{1}^{2}+(1+\alpha )(B_{1}-B_{2})\right] }, \end{equation*} and \begin{equation*} a_{2}=\frac{B_{1}\left( b_{2}-c_{2}\right) }{8(1+2\alpha )}. \end{equation*} Applying Lemma 1.1, the estimates in (28) and (29) follow.
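In more detail, adding (33) and (35), using \(c_{1}=-b_{1}\) from (36) and \(c_{1}=-2(1+\alpha )a_{1}/B_{1}\) from (32), gives \begin{equation*} 2(1+\alpha )a_{1}^{2}=\frac{1}{2}B_{1}(c_{2}+b_{2})-\frac{B_{1}-B_{2}}{4}(c_{1}^{2}+b_{1}^{2})=\frac{1}{2}B_{1}(c_{2}+b_{2})-\frac{2(1+\alpha )^{2}(B_{1}-B_{2})}{B_{1}^{2}}a_{1}^{2}, \end{equation*} and solving for \(a_{1}^{2}\) yields the displayed expression; subtracting (33) from (35) and using \(b_{1}^{2}=c_{1}^{2}\) gives \(4(1+2\alpha )a_{2}=\frac{1}{2}B_{1}(b_{2}-c_{2})\), i.e. the displayed value of \(a_{2}\).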

For \(\alpha =0,\) Theorem 2.18 gives the coefficient estimates for Ma-Minda bi-starlike functions, while for \(\alpha =1,\) it gives the following estimates for Ma-Minda bi-convex functions.

Corollary 2.19. Let \(\psi \) given by (5) be in the class \(C(\phi ).\) Then \begin{equation*} \left\vert a_{1}\right\vert \leq \frac{B_{1}\sqrt{B_{1}}}{\sqrt{2\left\vert B_{1}^{2}+2(B_{1}-B_{2})\right\vert }},\ \ \ \text{and}\ \ \ \left\vert a_{2}\right\vert \leq \frac{B_{1}}{6}. \end{equation*}

Using the parameter setting of Definition 2.15 in Theorem 2.18, we get the following corollary.

Corollary 2.20. For \(0\leq \alpha \leq 1\) and \(0< \gamma \leq 1,\) let the function \(\psi \in M_{\sigma }(\alpha ,\gamma )\) be of the form (5). Then \begin{equation*} \left\vert a_{1}\right\vert \leq \frac{2\gamma }{\sqrt{\left( 1+\alpha \right) \left[ \left( 1+\alpha \right) +\gamma \left( 1-\alpha \right) % \right] }}\ \ \ \text{and \ \ }\left\vert a_{2}\right\vert \leq \frac{\gamma }{1+2\alpha }. \end{equation*}

Using the parameter setting of Definition 2.17 in Theorem 2.18, we get the following corollary.

Corollary 2.21. For \(0\leq \alpha \leq 1\) and \(0< \nu \leq 1,\) let the function \(\psi \in M_{\sigma }(\alpha ,\nu )\) be of the form (5). Then \begin{equation*} \left\vert a_{1}\right\vert \leq \sqrt{\frac{2\left( 1-\nu \right) }{% 1+\alpha }}\ \ \ \text{and }\left\vert a_{2}\right\vert \leq \frac{\left( 1-\nu \right) }{1+2\alpha }. \end{equation*}

Definition 2.22. A function \(\psi \in \sigma \) given by (5) is said to be in the class \(\Im _{\alpha }(\alpha ,\phi )\left( 0\leq \alpha \leq 1\right) ,\) if the following subordinations hold: \begin{equation*} \left( \frac{z\psi ^{\prime }(z)}{\psi (z)}\right) ^{\alpha }\left( 1+\frac{z\psi ^{\prime \prime }(z)}{\psi ^{\prime }(z)}\right) ^{1-\alpha }\prec \phi (z)\left( z\in \Delta \right) , \end{equation*} and \begin{equation*} \left( \frac{wg^{\prime }(w)}{g(w)}\right) ^{\alpha }\left( 1+\frac{wg^{\prime \prime }(w)}{g^{\prime }(w)}\right) ^{1-\alpha }\prec \phi (w)\left( w\in \Delta \right) , \end{equation*} where \(g(w):=\psi ^{-1}(w).\) This class also reduces to the classes of Ma-Minda bi-starlike and bi-convex functions. For functions in this class, the following coefficient estimates are obtained.

Theorem 2.23. Let \(\psi (z)\in \Im _{\alpha }(\alpha ,\phi )\) be of the form (5). Then

\begin{equation} \left\vert a_{1}\right\vert \leq \frac{2B_{1}\sqrt{B_{1}}}{\sqrt{\left\vert 2\left( \alpha ^{2}-3\alpha +4\right) B_{1}^{2}+4(\alpha -2)^{2}(B_{1}-B_{2})\right\vert }}, \label{2.32} \end{equation}
(37)
and
\begin{equation} \left\vert a_{2}\right\vert \leq \frac{B_{1}}{2\left\vert 3-2\alpha \right\vert }. \label{2.33} \end{equation}
(38)

Proof. Let \(\psi \in \Im _{\alpha }(\alpha ,\phi )\); then there exist two Schwarz functions \(u\) and \(v\), defined by (9) and (10) respectively, such that

\begin{equation} \left( \frac{z\psi ^{\prime }(z)}{\psi (z)}\right) ^{\alpha }\left( 1+\frac{% z\psi ^{\prime \prime }(z)}{\psi ^{\prime }(z)}\right) ^{1-\alpha }=\phi (u(z)) \label{2.34} \end{equation}
(39)
and
\begin{equation} \left( \frac{wg^{\prime }(w)}{g(w)}\right) ^{\alpha }\left( 1+\frac{wg^{\prime \prime }(w)}{g^{\prime }(w)}\right) ^{1-\alpha }=\phi (v(w)). \label{2.35} \end{equation}
(40)
Since \begin{equation*} \left( \frac{z\psi ^{\prime }(z)}{\psi (z)}\right) ^{\alpha }\left( 1+\frac{z\psi ^{\prime \prime }(z)}{\psi ^{\prime }(z)}\right) ^{1-\alpha }=1-\left( 2-\alpha \right) a_{1}z \end{equation*} \begin{equation*} +\left[ \frac{\alpha ^{2}-3\alpha +4}{2}a_{1}^{2}-2\left( 3-2\alpha \right) a_{2}\right] z^{2}+...\ \ . \end{equation*} Also \begin{equation*} \left( \frac{wg^{\prime }(w)}{g(w)}\right) ^{\alpha }\left( 1+\frac{wg^{\prime \prime }(w)}{g^{\prime }(w)}\right) ^{1-\alpha }=1+\left( 2-\alpha \right) a_{1}w \end{equation*} \begin{equation*} +\left[ \frac{\alpha ^{2}-3\alpha +4}{2}a_{1}^{2}+2\left( 3-2\alpha \right) a_{2}\right] w^{2}+..., \end{equation*} from (11), (12), (39) and (40), it follows that
\begin{equation} -(2-\alpha )a_{1}=\frac{1}{2}B_{1}c_{1}, \label{2.36} \end{equation}
(41)
\begin{equation} \frac{\alpha ^{2}-3\alpha +4}{2}a_{1}^{2}-2\left( 3-2\alpha \right) a_{2}=% \frac{1}{2}B_{1}(c_{2}-\frac{c_{1}^{2}}{2})+\frac{1}{4}B_{2}c_{1}^{2}, \label{2.37} \end{equation}
(42)
\begin{equation} (2-\alpha )a_{1}=\frac{1}{2}B_{1}b_{1}, \label{2.38} \end{equation}
(43)
and
\begin{equation} \frac{\alpha ^{2}-3\alpha +4}{2}a_{1}^{2}+2\left( 3-2\alpha \right) a_{2}=% \frac{1}{2}B_{1}(b_{2}-\frac{b_{1}^{2}}{2})+\frac{1}{4}B_{2}b_{1}^{2}. \label{2.39} \end{equation}
(44)
Equations (41) and (43) obviously yield
\begin{equation} c_{1}=-b_{1}. \label{2.40} \end{equation}
(45)
Equations (42)-(45) lead to \begin{equation*} a_{1}^{2}=\frac{B_{1}^{3}(c_{2}+b_{2})}{2\left( \alpha ^{2}-3\alpha +4\right) B_{1}^{2}+4(\alpha -2)^{2}(B_{1}-B_{2})}. \end{equation*} By applying Lemma 1.1, we get the desired estimate of \( \left\vert a_{1}\right\vert \) as asserted in (37). Proceeding similarly as in the earlier proof, using (42)-(45), it follows that \begin{equation*} a_{2}=\frac{B_{1}(b_{2}-c_{2})}{8(3-2\alpha )}, \end{equation*} which, in view of Lemma 1.1, yields the estimate (38).

Definition 2.24. A function \(\psi \in \sigma\) given by (5) is said to be in the class \(\beta _{\alpha }(\lambda ,\phi ),\ \lambda \geq 0,\) if the following subordinations hold: \begin{equation*} \left( 1-\lambda \right) \frac{\psi (z)}{z}+\lambda \psi ^{\prime }(z)\prec \phi (z)\left( z\in \Delta \right) , \end{equation*} and \begin{equation*} \left( 1-\lambda \right) \frac{g(w)}{w}+\lambda g^{\prime }(w)\prec \phi (w)\left( w\in \Delta \right) , \end{equation*} where \(g(w):=\psi ^{-1}(w).\)

Theorem 2.25. Let \(\psi (z)\in \beta _{\alpha }(\lambda ,\phi ),\ \lambda \geq 0\) be of the form (5). Then

\begin{equation} \left\vert a_{1}\right\vert \leq \frac{B_{1}\sqrt{B_{1}}}{\sqrt{\left\vert \left( 1+2\lambda \right) B_{1}^{2}+(1+\lambda )^{2}(B_{1}-B_{2})\right\vert }}, \label{2.41} \end{equation}
(46)
and
\begin{equation} \left\vert a_{2}\right\vert \leq \frac{B_{1}}{1+2\lambda }. \label{2.42} \end{equation}
(47)

Proof. Let \(\psi \in \beta _{\alpha }(\lambda ,\phi )\); then there exist two Schwarz functions \(u\) and \(v\), defined by (9) and (10) respectively, such that

\begin{equation} \left( 1-\lambda \right) \frac{\psi (z)}{z}+\lambda \psi ^{\prime }(z)=\phi (u(z)) \label{2.43} \end{equation}
(48)
and
\begin{equation} \left( 1-\lambda \right) \frac{g(w)}{w}+\lambda g^{\prime }(w)=\phi (v(w)). \label{2.44} \end{equation}
(49)
Since \begin{equation*} \left( 1-\lambda \right) \frac{\psi (z)}{z}+\lambda \psi ^{\prime }(z)=1-\left( 1+\lambda \right) a_{1}z+\left[ \left( 1+2\lambda \right) \left( a_{1}^{2}-a_{2}\right) \right] z^{2}+..., \end{equation*} and \begin{equation*} \left( 1-\lambda \right) \frac{g(w)}{w}+\lambda g^{\prime }(w)=1+\left( 1+\lambda \right) a_{1}w+\left[ \left( 1+2\lambda \right) \left( a_{1}^{2}+a_{2}\right) \right] w^{2}+..., \end{equation*} from (11), (12), (48) and (49), it follows that
\begin{equation} -(1+\lambda )a_{1}=\frac{1}{2}B_{1}c_{1}, \label{2.45} \end{equation}
(50)
\begin{equation} (1+2\lambda )(a_{1}^{2}-a_{2})=\frac{1}{2}B_{1}(c_{2}-\frac{c_{1}^{2}}{2})+% \frac{1}{4}B_{2}c_{1}^{2}, \label{2.46} \end{equation}
(51)
\begin{equation} (1+\lambda )a_{1}=\frac{1}{2}B_{1}b_{1} \label{2.47} \end{equation}
(52)
and
\begin{equation} (1+2\lambda )(a_{1}^{2}+a_{2})=\frac{1}{2}B_{1}(b_{2}-\frac{b_{1}^{2}}{2})+% \frac{1}{4}B_{2}b_{1}^{2}. \label{2.48} \end{equation}
(53)
Now (50) and (52) clearly yield
\begin{equation} c_{1}=-b_{1}. \label{2.49} \end{equation}
(54)
Equations (51), (53) and (54) lead to \begin{equation*} a_{1}^{2}=\frac{B_{1}^{3}(c_{2}+b_{2})}{4\left[ \left( 1+2\lambda \right) B_{1}^{2}+\left( 1+\lambda \right) ^{2}(B_{1}-B_{2})\right] }. \end{equation*} By applying Lemma 1.1, we get the desired estimate of \(\left\vert a_{1}\right\vert\) as asserted in (46). Proceeding similarly as in the earlier proof, using (51)-(54), it follows that \begin{equation*} a_{2}=\frac{B_{1}(b_{2}-c_{2})}{4(1+2\lambda )}, \end{equation*} which, in view of Lemma 1.1, yields the estimate (47).

Competing Interests

The authors declare that they have no competing interests.

]]>
Asymptotic stability and blow-up of solutions for the generalized boussinesq equation with nonlinear boundary condition https://old.pisrt.org/psr-press/journals/oma-vol-2-issue-2-2018/asymptotic-stability-and-blow-up-of-solutions-for-the-generalized-boussinesq-equation-with-nonlinear-boundary-condition/ Thu, 13 Dec 2018 20:12:07 +0000 https://old.pisrt.org/?p=1630
OMA-Vol. 2 (2018), Issue 2, pp. 93–113 | Open Access Full-Text PDF
Jian Dang, Qingying Hu, Hongwei Zhang
Abstract:In this paper, we consider initial boundary value problem of the generalized Boussinesq equation with nonlinear interior source and boundary absorptive terms. We establish both the existence of the solution and a general decay of the energy functions under some restrictions on the initial data. We also prove a blow-up result for solutions with positive and negative initial energy respectively.
]]>
Open Access Full-Text PDF

Open Journal of Mathematical Analysis

Asymptotic stability and blow-up of solutions for the generalized Boussinesq equation with nonlinear boundary condition

Jian Dang, Qingying Hu, Hongwei Zhang\(^1\)
Department of Mathematics, Henan University of Technology, Zhengzhou 450001, China.; (J.D & Q.H & H.Z)
\(^{1}\)Corresponding Author; whz661@163.com

Copyright © 2018 Jian Dang, Qingying Hu, Hongwei Zhang. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.

Abstract

In this paper, we consider initial boundary value problem of the generalized Boussinesq equation with nonlinear interior source and boundary absorptive terms. We establish both the existence of the solution and a general decay of the energy functions under some restrictions on the initial data. We also prove a blow-up result for solutions with positive and negative initial energy respectively.

Keywords:

Generalized Boussinesq equation; Nonlinear boundary condition; Global existence; Blow-up; Decay.

1. Introduction

In this paper, we consider the following initial boundary value problem for the generalized Boussinesq equation with a nonlinear Neumann condition

\begin{equation} \left \{\begin{array}{rl} &u_{t}- \Delta u_t- \Delta u +|u|^{q-2}u_t = f(u), \\ &u=0,x \in \Gamma_0,\\ &\frac{\partial u}{\partial \nu }+g(u)=0, x \in \Gamma_1,\\ &u(x, 0) = u_0(x), x \in \Omega, \end{array} \right. \label{1.1} \end{equation}
(1)
where \(u=u(t,x)\) (\(t\ge 0,x \in \Omega\)), \(\Delta\) denotes the Laplacian operator with respect to the \(x\) variable, \(\Omega\) is a bounded open subset of \(R^n(n\ge 1)\) of class \(C^1\), \(\partial \Omega=\Gamma_0\cup \Gamma_1\), \(meas(\Gamma_0)>0\), \(\Gamma_0\cap \Gamma_1=\emptyset\), \(\frac{\partial }{\partial \nu }\) denotes the outer normal derivative, \(q>2\) is a positive constant, the initial datum \(u_0\) is a given function with the compatibility boundary condition \(u_0=0\) on \(\Gamma_0\), and \(f(s)\) and \(g(s)\) are continuous functions. For the sake of simplicity, in this paper we consider \(f(s)=a|s|^{p-1}s, g(s)= b|s|^{k-1}s\), where \(p>1,k>1\) and \(a=b=1\).

Problem (1) was derived in [1]. This problem describes an electric breakdown in crystalline semiconductors with allowance for the linear dissipation of bound- and free-charge sources [1, 2, 3], where the nonlinear Neumann boundary condition on the boundary of the semiconductor was introduced. To the authors' knowledge, there are few works on the study of problem (1). Korpusov and Sveshnikov [4] and Makarov [5] proved a local theorem on the existence of solutions to the following problem

\begin{equation} \left \{\begin{array}{rl} &u_{t}- \Delta u_t- \Delta u +(|u|^{q_3}u)_t =|u|^{q_2}u , \\ &\frac{\partial u}{\partial \nu }+|u|^{q_1}u=0, x \in \partial \Omega=\Gamma,\\ &u(x, 0) = u_0(x), x \in \Omega \end{array} \right. \label{1.2} \end{equation}
(2)
by using the Galerkin method combined with the compactness method. By using the method of energy inequalities [6, 7], they also obtained sufficient conditions for the blow-up of solutions in a finite time interval and established upper and lower bounds for the blow-up time, provided the initial data satisfies \begin{eqnarray}&& \int_{\Omega}[\frac{1}{q_2+2}| u_0 |^{q_2+2}-\frac{1}{2}|\nabla u_0 |^{2}]dx-\frac{1}{q_1+2}\int_{\Gamma}| u_0 |^{q_1+2}dx \nonumber\\ &&\ge c_1\{\int_{\Omega}[\frac{1}{2}|\nabla u_0 |^{2}+\frac{q_3+1}{q_3+2}| u_0 |^{q_3+2}]dx+\frac{q_1+1}{q_1+2}\int_{\Gamma}| u_0 |^{q_1+2}dx\}, \nonumber \end{eqnarray} where \(c_1\) is a positive constant depending on \(q_1,q_2,q_3\).

In this paper, we consider both the existence of the solution and a general decay of the energy functions under some restrictions on the initial data. We also study blow-up condition of the solutions with positive and negative initial energy respectively.
Before we state and prove our results, let us recall some works related to the problem we address.
In the absence of the nonlinear diffusion term \(|u|^{q-2}u_t\) and \(g(u)=0\), problem (1) can be reduced to the following classical problem
\begin{equation} \left \{\begin{array}{rl} &u_{t}- \Delta u_t- \Delta u= f(u), \\ &\frac{\partial u}{\partial \nu }=0,~or~u=0,~x \in \partial\Omega,\\ &u(x, 0) = u_0(x), x \in \Omega, \end{array} \right. \label{1.3} \end{equation}
(3)
The first equation in problem (3) is variously called a Sobolev type equation, a Sobolev-Galpern type equation, a pseudo-parabolic equation, or the Benjamin-Bona-Mahony-Burgers (BBM-Burgers) equation (for example, see [1, 3, 8, 9]). It also appears as a nonclassical diffusion equation in fluid mechanics, solid mechanics and heat conduction theory; for instance, see [10] and the references therein. It is well known that problem (3) has been studied by many authors. A powerful technique to treat problem (3) is the so-called "potential well method", which was established by Payne and Sattinger [11] and Sattinger [12], and then improved by Liu and Zhao [13] by introducing a family of potential wells. Recently, there have been some interesting results on the global existence and blow-up of solutions for problem (3) with \(f(u)=u^p\) in [14], where a family of potential wells is introduced to prove global existence, nonexistence and asymptotic behavior of solutions with low initial energy, while for high initial energy, finite time blow-up of solutions is obtained by a comparison principle. For other related works, we refer the readers to [1, 2, 3, 6, 7, 8, 10, 15, 16, 17, 18, 19, 20, 21, 22, 23, 24] and the references therein. The obtained results show that global existence and nonexistence depend roughly on \(p\), the degree of nonlinearity in \(f\), the dimension \(n\), and the size of the initial data.
The equation in problem (1) with Dirichlet boundary condition (i.e. \(g(u)=0\)) has also been studied by many authors [1, 2, 3, 16, 25, 26, 27, 28]. Korpusov and Sveshnikov et al. [1, 2, 3, 16, 25, 26] established local strong solutions and sufficient, close-to-necessary conditions for the blow-up of solutions with negative initial energy using the energy approach developed by Levine [6]. Furthermore, they also considered two different abstract Cauchy problems for equations of Sobolev type. Zhang et al. [27, 28] showed the exponential growth and blow-up of solutions with negative or positive initial energy by constructing a differential inequality. We also refer to [29, 30, 31, 32, 33, 34, 35] for related results.
For the following parabolic equation with a nonlinear boundary condition or dynamic boundary condition
\begin{equation} \left \{\begin{array}{rl} &u_{t}- \Delta u= f(u), \\ &u=0,x \in \Gamma_0,\\ &\frac{\partial u}{\partial \nu }=-Q(u_t)+g(u), x \in \Gamma_1,\\ &u(x, 0) = u_0(x), x \in \Omega, \end{array} \right. \label{1.4} \end{equation}
(4)
local well-posedness, global existence and blow-up results for the solutions have also been widely studied. For example, Levine and Smith [36] and Vitillaro [37, 38] studied local and global existence and nonexistence of the solutions to problem (4) by potential well theory. We would also like to mention the classical global existence and nonexistence results in [39, 40, 41, 42]. For problem (4) with \(Q=0\), as in [43], if we interpret \(u\) as a heat distribution in the body \(\Omega\) and assume that \(u\ge 0\) for the moment, then for ranges in which \(-f\) is positive we have "absorption" of heat, while when \(-f\) is negative we have "sources" of heat. The same holds for \(-g\): when \(-g\) is positive we have a flow of heat through the boundary of \(\Omega\) that extracts heat from the body, while in the opposite case, heat is flowing inside \(\Omega\). Thus, for problem (1) with \(f(s)=|s|^{p-1}s, g(s)= |s|^{k-1}s\), \(f\) can be called the "source" term and \(g\) the "boundary absorptive" term. When the term \(|u|^{q-2}u_t\) is not present in problem (1), the same boundary condition arises in the literature in connection with the wave equation, i.e. when the operator \(u_t-\Delta u\) in (1) is replaced by the wave operator \(u_{tt}-\Delta u\). Some related problems concerning wave equations with nonlinear damping and source terms have been considered in [44, 45, 46, 47, 48, 49, 50, 51, 52, 53]. In particular, Cavalcanti et al. [44] deal with the problem
\begin{equation} \left \{\begin{array}{rl} &u_{tt}- \Delta u= f(u), \\ &\frac{\partial u}{\partial \nu }+u=-h(u_t)+g(u), x \in \partial \Omega,\\ &u(x, 0) = u_0(x),u_t(0)=u_1, x \in \Omega, \end{array} \right. \label{1.5} \end{equation}
(5)

where, under some assumptions imposed on the damping and source terms, they showed the well-posedness of the problem and effective optimal decay rates for the solutions. They also established a blow-up result in the case where the boundary source dominates the boundary damping and the initial data are large enough. In general, methods employed to study hyperbolic problems cannot be employed to study parabolic problems, and conversely. Nevertheless, the arguments of [44] can be conveniently adapted to problem (1) without \(|u|^{q-2}u_t\). However, there are several important differences in the proofs, which make the adaptation non-trivial. The first essential difference with respect to [44] is that the boundary source term appearing in (5) is now a boundary absorptive term. When one combines boundary absorption and interior source terms with initial data of arbitrary size, the analysis becomes more difficult. Moreover, the terms \(- \Delta u_t\) and \(|u|^{q-2}u_t\) differ from the boundary damping term \(Q(u_t)\) in [44].

In this paper, we will investigate the existence and nonexistence of global solutions to problem (1). More precisely, under appropriate assumptions imposed on the source and boundary absorption terms, we shall establish global existence of solutions by using the potential well method combined with a standard continuation argument. We will give sufficient conditions for the blow-up of solutions in a finite time interval under suitable initial data using a differential inequality. This differs from the results in [4, 5]. We also give a general decay of the energy by means of an integral inequality from [54].

This paper is organized as follows. Section 2 is concerned with some notations and statement of assumptions. In Section 3, we prove global existence of solutions and the blow-up result for the solutions with positive and negative initial energy respectively. In Section 4, a general decay of the energy is proved.

2. Preliminaries

In this section, we present some materials needed in the proof of our results. We use the standard Lebesgue space \(L^p(\Omega)(1< p< \infty)\) and Sobolev space \(H^1(\Omega)\) with their usual scalar products and norms. Moreover, we denote \(||u||_{L^{p}(\Omega)}= || u ||_{p}\) and \(||u||_{L^{p}(\Gamma_1)}= || u ||_{p,\Gamma_1}\) for \(1\le p \le \infty\), and the Hilbert space \(H^1_{\Gamma_0}(\Omega):= \{ u \in H^1(\Omega): u_{|\Gamma_0}=0\}\), \(||u||^2_{H^1_{\Gamma_0}}=||\nabla u ||^2_2+|| u ||^2_2\), where \(u_{|\Gamma_0}\) stands for the restriction of the trace of \(u\) on \(\partial \Omega\) to \(\Gamma_0\), and in particular, we denote \(||u||_2= || u ||\) and \(||u||_{2,\Gamma_1}= || u ||_{\Gamma_1}\). Since \(meas(\Gamma_0)>0\), a Poincaré-type inequality holds and consequently \(||\nabla u ||\) is an equivalent norm on \(H^1_{\Gamma_0}(\Omega)\). The constants \(C\) used throughout this paper are positive generic constants, which may differ from occurrence to occurrence.

We assume that
\begin{align} &1< p\le \frac{n+2}{n-2},\ \ 1< q \le \frac{n}{n-2}\ \ \text{if}\ n\ge 3;\nonumber \\ &p> 1,\ q> 1\ \ \text{if}\ n=1,2;\qquad p>\max\{q-1,k\}>1.\label{2.1} \end{align}
(6)
Then, we have the Sobolev embedding \(H^1_{\Gamma_0}(\Omega)\hookrightarrow L^{p+1}(\Omega)\) and the trace-Sobolev embedding \(H^1_{\Gamma_0}(\Omega)\hookrightarrow L^{k+1}(\Gamma_1)\). We denote the corresponding embedding constants by \(c_*\) and \(B_*\) respectively, i.e.
\begin{eqnarray} &&|| u ||_{p+1}\le c_*||u||_{H^1_{\Gamma_0}(\Omega)},||u||_{k+1,\Gamma_1}\le B_*||u||_{H^1_{\Gamma_0}(\Omega)}.\label{2.2} \end{eqnarray}
(7)
A function \(u(x,t)\) of class \(H^{1}(0,T;H^1_{\Gamma_0}(\Omega))\) is called a weak generalized solution of problem (1) if it satisfies the equation \begin{eqnarray}&& (u_{t},\phi)+(\nabla u_{t},\nabla \phi) + (\nabla u,\nabla \phi)+\int_{\Omega}|u|^{q-2}u_t\phi dx-\int_{\Omega}|u|^{p-1}u\phi dx\nonumber\\ &&+\int_{\Gamma_1}|u|^{k-1}u\phi dx+k\int_{\Gamma_1}|u|^{k-1}u_t\phi dx=0\nonumber \end{eqnarray} for any \(\phi \in H^1_{\Gamma_0}(\Omega)\) and almost all \(t\in [0,T]\), together with the initial condition \(u(x,0)=u_0(x)\) (see [4, 5]).

Theorem 2.1. Let \(u_{0}\in H^{1}(0,T;H^1_{\Gamma_0}(\Omega))\) and \(p,q,k\) satisfy (6), then problem (1) has a unique weak generalized solution on \([0,T_0)\) for some \(T_0>0\), and we have either \(T_0=+\infty\) or \(T_0< +\infty\) and $$\limsup\limits_{t \rightarrow T_0^-}||u||^2_{H^1_{\Gamma_0}(\Omega)}=+\infty.$$

Theorem 2.1 can be established by combining the argument of [55] with Theorems 1 and 2 in [4, 5]; we therefore omit the proof.
We define the functional that plays the role of the "potential energy"
\begin{eqnarray} &&E(t)=E(u)=\frac{1}{2}||\nabla u ||^{2}-\frac{1}{p+1}||u ||^{p+1}_{p+1} +\frac{1}{k+1}||u||^{k+1}_{k+1,\Gamma_1} \nonumber\\ &&=\frac{1}{2}||u||^2_{H^1_{\Gamma_0}(\Omega)}-\frac{1}{p+1}||u ||^{p+1}_{p+1} +\frac{1}{k+1}||u||^{k+1}_{k+1,\Gamma_1},\label{2.3} \end{eqnarray}
(8)
and the Nehari functional \begin{eqnarray} &&I(u)=||u||^2_{H^1_{\Gamma_0}(\Omega)}-||u ||^{p+1}_{p+1} +||u||^{k+1}_{k+1,\Gamma_1}.\nonumber \end{eqnarray} We also have the following identity
\begin{eqnarray} &E'(t)=-\frac{1}{2}|| u_{t} ||^{2}_{H^1_{\Gamma_0}(\Omega)}-\int_{\Omega}|u|^{q-2}u_t^2 dx-k\int_{\Gamma_1}|u|^{k-1}u_t^2 dx \le 0.\label{2.4} \end{eqnarray}
(9)

In the sequel, a crucial role is played by the Nehari manifold to \(I\), which is $$N=\{u\in H^1_{\Gamma_0}(\Omega)| I(u)=0, ||u||_{H^1_{\Gamma_0}(\Omega)}\neq 0 \},$$ and we can readily give the mountain-pass level \(d\) by \(d=\inf \limits_{u\in N}E(u)\).

Next, we show some properties related to functions \(E(u)\) and \(I(u)\) in the following lemmas.

Lemma 2.2. Let \(u \in H^1_{\Gamma_0}(\Omega)\), \(||u||_{H^1_{\Gamma_0}(\Omega)}\neq 0\) and (6) hold, then
(i)\(\lim \limits_{\lambda \rightarrow 0}E(\lambda u)=0\), \(\lim \limits_{\lambda \rightarrow + \infty}E(\lambda u)=- \infty\);
(ii) In the interval \(0 < \lambda < \infty\), there exists a unique \(\lambda_0=\lambda_0(u)>0\) such that \(\frac{d}{d \lambda}E(\lambda u)|_{\lambda=\lambda_0}=0\);
(iii) \(E(\lambda u)\) is increasing on \(0 < \lambda \le \lambda_0\), decreasing on \(\lambda_0 \le \lambda < +\infty\) and takes the maximum at \(\lambda =\lambda_0\);
(iv) \(I(\lambda u)>0\), for \(0< \lambda < \lambda_0\); \(I(\lambda u)< 0\) , for \(\lambda >\lambda_0\) and \(I(\lambda_0u)=0\).

Proof. (i) The conclusion follows from \begin{eqnarray} &&E(\lambda u)=\frac{\lambda^2}{2}||u||^2_{H^1_{\Gamma_0}(\Omega)}-\frac{\lambda^{p+1}}{p+1}||u ||^{p+1}_{p+1} +\frac{\lambda^{k+1}}{k+1}||u||^{k+1}_{k+1,\Gamma_1}.\nonumber \end{eqnarray} (ii) First, note that \begin{align} &\frac{d}{d \lambda}E(\lambda u)=\lambda ||u||^2_{H^1_{\Gamma_0}(\Omega)}-\lambda^{p}||u ||^{p+1}_{p+1} +\lambda^{k}||u||^{k+1}_{k+1,\Gamma_1}=0, \lambda >0\nonumber \end{align} is equivalent to

\begin{align} &\lambda^{p-1}||u ||^{p+1}_{p+1} -\lambda^{k-1}||u||^{k+1}_{k+1,\Gamma_1}=||u||^2_{H^1_{\Gamma_0}(\Omega)}.\label{2.5} \end{align}
(10)
Let \begin{eqnarray*} h(\lambda)&=&\lambda^{p-1}||u ||^{p+1}_{p+1} -\lambda^{k-1}||u||^{k+1}_{k+1,\Gamma_1}\\ &=&\lambda^{k-1}(\lambda^{p-k}||u ||^{p+1}_{p+1} -||u||^{k+1}_{k+1,\Gamma_1})\\ &=&\lambda^{k-1}h_1(\lambda), \end{eqnarray*}

where \(h_1(\lambda)=\lambda^{p-k}||u ||^{p+1}_{p+1} -||u||^{k+1}_{k+1,\Gamma_1}\). Note that \(h_1(\lambda)\) is increasing on \(0 < \lambda < \infty\), \(\lim \limits_{\lambda \rightarrow 0^{+}}h_1(\lambda )\le 0\), and \(\lim \limits_{\lambda \rightarrow + \infty}h_1(\lambda )=+\infty\), and hence there exists a unique \(\lambda^{*} > 0\) such that \(h_1(\lambda^{*})=0\), thereby \(h(\lambda^{*})=0\), \(h(\lambda)< 0 \) for \(0 < \lambda < \lambda^{*}\), \(h(\lambda)>0\) for \(\lambda^{*}< \lambda < \infty\). Hence, for any \(||u||_{H^1_{\Gamma_0}(\Omega)}> 0\), there exists a unique \(\lambda_0 > \lambda^{*}\) such that (10) holds, and then (ii) holds.

(iii) Note that \(\frac{d}{d \lambda}E(\lambda u)=\lambda (||u||^2_{H^1_{\Gamma_0}(\Omega)}-h(\lambda))\). From the proof of (ii), it follows that if \(0 < \lambda < \lambda^{*}\), then \(h(\lambda)< 0\); if \(\lambda^{*} < \lambda < \lambda_0\), then \(0< h(\lambda)< ||u||^2_{H^1_{\Gamma_0}(\Omega)}\); and if \(\lambda_0< \lambda < \infty\), then \(h(\lambda)> ||u||^2_{H^1_{\Gamma_0}(\Omega)}\). From this, the conclusion of (iii) holds.
(iv) The conclusion follows from the proof of (iii) and \begin{align} &I(\lambda u)=\lambda^2||u||^2_{H^1_{\Gamma_0}(\Omega)}-\lambda^{p+1}||u ||^{p+1}_{p+1} +\lambda^{k+1}||u||^{k+1}_{k+1,\Gamma_1}=\lambda \frac{d}{d \lambda}E(\lambda u).\nonumber \end{align} This completes the proof of Lemma 2.2.

Now, we define \begin{align} &F(x)=\frac{1}{2}x^2-\frac{c_{*}^{p+1}}{p+1}x^{p+1} -\frac{B_{*}^{k+1}}{k+1}x^{k+1},\nonumber \end{align} and let \(r_0\) be the unique positive root of the equation \(F'(x)=0\). We easily verify that \(r_0\) is the unique positive root of the equation \(\phi(x)=1\), where \(\phi(x)=c_{*}^{p+1}x^{p-1} +B_{*}^{k+1}x^{k-1}\), so that \(\phi(r_0)=c_{*}^{p+1}r_0^{p-1} +B_{*}^{k+1}r_0^{k-1}=1\). It can be checked that \(r_0\) is a point of local maximum for \(F(x)\) (see [44] for more details). Accordingly, let us define \(E_1\) as \begin{align} &E_1=F(r_0)=\frac{1}{2}r_0^2-\frac{c_{*}^{p+1}}{p+1}r_0^{p+1} -\frac{B_{*}^{k+1}}{k+1}r_0^{k+1}.\nonumber \end{align}
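In general \(r_0\) is only defined implicitly by \(\phi(r_0)=1\). As a purely illustrative numerical sketch (with hypothetical sample values of \(c_*\), \(B_*\), \(p\) and \(k\); these are not data from the problem), the following Python snippet locates \(r_0\) by bisection, using the fact that \(\phi\) is strictly increasing on \((0,\infty)\), and then evaluates \(E_1=F(r_0)\):

def phi(x, c_star, B_star, p, k):
    # phi(x) = c_*^{p+1} x^{p-1} + B_*^{k+1} x^{k-1}, strictly increasing for x > 0
    return c_star ** (p + 1) * x ** (p - 1) + B_star ** (k + 1) * x ** (k - 1)

def F(x, c_star, B_star, p, k):
    # F(x) = x^2/2 - c_*^{p+1} x^{p+1}/(p+1) - B_*^{k+1} x^{k+1}/(k+1)
    return (0.5 * x ** 2 - c_star ** (p + 1) / (p + 1) * x ** (p + 1)
            - B_star ** (k + 1) / (k + 1) * x ** (k + 1))

def r0_bisect(c_star, B_star, p, k, steps=200):
    # unique positive root of phi(x) = 1 (phi(0+) = 0 and phi(x) -> +infinity)
    lo, hi = 0.0, 1.0
    while phi(hi, c_star, B_star, p, k) < 1.0:
        hi *= 2.0
    for _ in range(steps):
        mid = 0.5 * (lo + hi)
        if phi(mid, c_star, B_star, p, k) < 1.0:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

c_star, B_star, p, k = 1.0, 1.0, 4, 2   # hypothetical sample values
r0 = r0_bisect(c_star, B_star, p, k)
print("r_0 =", r0, " E_1 = F(r_0) =", F(r0, c_star, B_star, p, k))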

Lemma 2.3. Let (6) hold, then
(i) if \(0 < ||u||_{H^1_{\Gamma_0}(\Omega)} < r_0 \), then \(I(u)>0\); (ii)if \(I(u)< 0\), then \(||u||_{H^1_{\Gamma_0}(\Omega)}>r_0\); (iii) if \(I(u)=0\) and \(||u||_{H^1_{\Gamma_0}(\Omega)}\ne 0\), i.e. \(u \in N\), then \(||u||_{H^1_{\Gamma_0}(\Omega)}\ge r_0\).

Proof. (i)Since \(\phi(x)\) is a strictly increasing function in \((0, r_0)\), from $$0 < ||u||_{H^1_{\Gamma_0}(\Omega)}< r_0,$$ we get \(\phi(||u||_{H^1_{\Gamma_0}(\Omega)})< \phi(r_0)\) and \begin{eqnarray} &&I(u)=||u||^2_{H^1_{\Gamma_0}(\Omega)}-||u ||^{p+1}_{p+1} +||u||^{k+1}_{k+1,\Gamma_1}\nonumber\\ &&\ge||u||^2_{H^1_{\Gamma_0}(\Omega)}-||u ||^{p+1}_{p+1}-||u||^{k+1}_{k+1,\Gamma_1}\nonumber\\ &&= ||u||^2_{H^1_{\Gamma_0}(\Omega)}(1-c_{*}^{p+1}||u||_{H^1_{\Gamma_0}(\Omega)}^{p-1}-B_{*}^{k+1}||u||_{H^1_{\Gamma_0}(\Omega)}^{k-1})\nonumber\\ &&=||u||^2_{H^1_{\Gamma_0}(\Omega)}(\phi(r_0)-\phi(||u||_{H^1_{\Gamma_0}(\Omega)}))>0.\nonumber \end{eqnarray} (ii) Condition \(I(u)< 0\) gives \begin{eqnarray} &&\phi(r_0)||u||^2_{H^1_{\Gamma_0}(\Omega)}=||u||^2_{H^1_{\Gamma_0}(\Omega)}\nonumber\\ &&<||u ||^{p+1}_{p+1} -||u||^{k+1}_{k+1,\Gamma_1}< ||u ||^{p+1}_{p+1} +||u||^{k+1}_{k+1,\Gamma_1}\nonumber\\ &&\le ( c_{*}^{p+1}||u||_{H^1_{\Gamma_0}(\Omega)}^{p-1} +B_{*}^{k+1}||u||_{H^1_{\Gamma_0}(\Omega)}^{k-1})||u||^2_{H^1_{\Gamma_0}(\Omega)}=\phi(||u||_{H^1_{\Gamma_0}(\Omega)})||u||^2_{H^1_{\Gamma_0}(\Omega)},\nonumber \end{eqnarray} which implies \(||u||_{H^1_{\Gamma_0}(\Omega)}\ne 0\) and \(||u||_{H^1_{\Gamma_0}(\Omega)}>r_0\) by the monotonicity of \(\phi\).
(iii) If \(I(u)=0\) and \(||u||_{H^1_{\Gamma_0}(\Omega)}\ne 0\), then \begin{eqnarray} &&\phi(r_0)||u||^2_{H^1_{\Gamma_0}(\Omega)}=||u||^2_{H^1_{\Gamma_0}(\Omega)}=||u ||^{p+1}_{p+1} -||u||^{k+1}_{k+1,\Gamma_1}\nonumber\\ &&\le ||u ||^{p+1}_{p+1} +||u||^{k+1}_{k+1,\Gamma_1}\le\phi(||u||_{H^1_{\Gamma_0}(\Omega)})||u||^2_{H^1_{\Gamma_0}(\Omega)},\nonumber \end{eqnarray} and from the monotonicity of \(\phi\), we get \(||u||_{H^1_{\Gamma_0}(\Omega)}\ge r_0\).

Lemma 2.4. \(d \ge d_0=(\frac{1}{2}-\frac{1}{p+1})r_0^2= \frac{p-1}{2(p+1)}r_0^2\).

Proof. For \(u \in N\) (or \(I(u)=0\) and \(||u||_{H^1_{\Gamma_0}(\Omega)}\ne 0\)), by Lemma 2.3, we have \(||u||_{H^1_{\Gamma_0}(\Omega)}\ge r_0\). Hence \begin{eqnarray} &&E(u)\ge \frac{1}{2}||u||^2_{H^1_{\Gamma_0}(\Omega)}+\frac{1}{p+1}(-||u ||^{p+1}_{p+1}+||u||^{k+1}_{k+1,\Gamma_1})\nonumber\\ &&=(\frac{1}{2}-\frac{1}{p+1})||u||^2_{H^1_{\Gamma_0}(\Omega)}+\frac{1}{p+1}I(u)\nonumber\\ &&=(\frac{1}{2}-\frac{1}{p+1})||u||^2_{H^1_{\Gamma_0}(\Omega)}\ge(\frac{1}{2}-\frac{1}{p+1})r_0^2,\nonumber \end{eqnarray} which gives \(d\ge d_0\).

Remark 2.5. Noting the definition of \(d\) and the fact that

\begin{align} &E(u)=\frac{1}{2}||u||^2_{H^1_{\Gamma_0}(\Omega)}-\frac{1}{p+1}||u ||^{p+1}_{p+1} +\frac{1}{k+1}||u||^{k+1}_{k+1,\Gamma_1}\nonumber\\ &\ge \frac{1}{2}||u||^2_{H^1_{\Gamma_0}(\Omega)}-\frac{c_{*}^{p+1}}{p+1}||u||_{H^1_{\Gamma_0}(\Omega)}^{p+1} -\frac{B_{*}^{k+1}}{k+1}||u||_{H^1_{\Gamma_0}(\Omega)}^{k+1}=F(||u||_{H^1_{\Gamma_0}(\Omega)}),\label{2.6} \end{align}
(11)
we know \(d\ge E_1\).

Now we define the subsets of \(H^1_{\Gamma_0}(\Omega)\) related to problem (1). Set
\begin{align} W=\{u \in H^1_{\Gamma_0}(\Omega)|~E(u)< d,I(u)>0\}, V=\{u \in H^1_{\Gamma_0}(\Omega)|~E(u)< d,I(u)< 0\}. \label{2.7} \end{align}
(12)

Lemma 2.6. If \(u_0 \in H^1_{\Gamma_0}(\Omega)\), \(0< E(0)< d\), and \(u\) is a weak solution of problem (1), then (i) \(u \in W\) if \(I(u_0)>0\) or \(||u_0||_{H^1_{\Gamma_0}(\Omega)}= 0\); (ii) \(u \in V\) if \(I(u_0)< 0\).

Proof. We only prove (i), as the proof for (ii) is similar. We are going to prove that \(u \in W\) for \(0< t< T_0\). From (9), we have \begin{eqnarray*} && E(u(t))+\int_0^t[\frac{1}{2}|| u_{t} ||^{2}_{H^1_{\Gamma_0}(\Omega)}+\int_{\Omega}|u|^{q-2}u_t^2 dx+k\int_{\Gamma_1}|u|^{k-1}u_t^2 dx]ds \\ && =E(0)< d\ \ \text{for any}\ t \in [0,T_0), \nonumber \end{eqnarray*} which implies \(E(u(t))< d\). To prove that \(u \in W\) for \(0< t< T_0 \), we argue by contradiction. Indeed, if it is not the case, there would exist \(t_0 \in (0, T_0)\) such that \(u(t_0) \in N\), and by the definition of \(d=\inf \limits_{u\in N}E(u)\), one would have \(d\le E(u(t_0))< d\), a contradiction.

3. Global existence and blow-up of solutions

In this section, we prove the global existence and blow-up of solutions to problem (1).

Theorem 3.1. Let \(u_0 \in H^1_{\Gamma_0}(\Omega)\), \(0< E(0)< d\), \(I(u_0)>0\) or \(||u_0||_{H^1_{\Gamma_0}(\Omega)}= 0\), and let \(p,q,k\) satisfy (6). Then the weak solution \(u\) to problem (1) in Theorem 2.1 can be extended to \((0,\infty)\).

Proof. By Lemma 2.6, we have \(u \in W\); hence \(I(u)>0\) and \(E(u)< d\) for all \(t\in (0,T_0)\). Therefore,
\begin{eqnarray} &&d>E(u)=\frac{1}{2}|| u ||^{2}_{H^1_{\Gamma_0}(\Omega)}-\frac{1}{p+1}||u ||^{p+1}_{p+1}+ \frac{1}{k+1} ||u||^{k+1}_{k+1,\Gamma_1}\nonumber\\ &&>(\frac{1}{2}-\frac{1}{p+1})|| u ||^{2}_{H^1_{\Gamma_0}(\Omega)}+\frac{1}{p+1}I(u)\nonumber\\ &&>(\frac{1}{2}-\frac{1}{p+1})|| u ||^{2}_{H^1_{\Gamma_0}(\Omega)}\label{3.1} \end{eqnarray}
(13)
for all \(t\in (0,T_0)\). Then, (13) and (7) imply
\begin{eqnarray} || u ||^{2}_{H^1_{\Gamma_0}(\Omega)}&<&\frac{2(p+1)d}{p-1},\nonumber\\ ||u||^{p+1}_{p+1}&<&c_*^{p+1}\Big(\frac{2(p+1)d}{p-1}\Big)^\frac{p+1}{2},\nonumber\\ ||u||^{k+1}_{k+1,\Gamma_1}&<&B_*^{k+1}\Big(\frac{2(p+1)d}{p-1}\Big)^\frac{k+1}{2}\label{3.2} \end{eqnarray}
(14)
for all \(t\in (0,T_0)\). By (8) and the definition of \(E(u)\), we have
\begin{eqnarray} &\frac{1}{2}|| u_{t} ||^{2}+\frac{1}{2}||u||^2_{H^1_{\Gamma_0}(\Omega)}\le E(0)+\frac{1}{p+1}||u ||^{p+1}_{p+1} -\frac{1}{k+1}||u||^{k+1}_{k+1,\Gamma_1}< C < + \infty\label{3.3} \end{eqnarray}
(15)
for all \(t\in (0,T_0)\). It follows from (15) and a standard continuation argument that the local weak solution \(u\) furnished by Theorem 2.1 can be extended to the whole interval \([0,\infty)\); that is, \(u\) is a global solution.

Theorem 3.2. Suppose that assumption (6) holds, \(u(0)=u_0 \in H^1_{\Gamma_0}(\Omega)\) and \(u\) is a local solution of problem (1). If \(E(0)< 0\), then the solution of the system (1) blows up in finite time.

Proof. We set

\begin{align} & H(t)=-E(t).\label{3.4} \end{align}
(16)
By the definition of \(H(t)\) and (9),
\begin{align}& H'(t)=-E'(t) \ge 0. \label{3.5} \end{align}
(17)
Consequently, by \(E(0)< 0\), we have
\begin{align} & H(0)=-E(0)>0.\label{3.6} \end{align}
(18)
It is clear that by (17) and (18)
\begin{align} &0< H(0)\le H(t).\label{3.7} \end{align}
(19)
By (16) and the expression of \(E(t)\),
\begin{align}& H(t)-\frac{1}{p+1}||u||^{p+1}_{p+1} +\frac{1}{k+1} ||u||^{k+1}_{k+1,\Gamma_1}=-\frac{1}{2}||\bigtriangledown u||^2< 0.\label{3.8} \end{align}
(20)
This implies
\begin{align} & 0< H(0)\le H(t)\le \frac{1}{p+1}||u||^{p+1}_{p+1} -\frac{1}{k+1} ||u||^{k+1}_{k+1,\Gamma_1}\nonumber\\ &\le \frac{1}{p+1}||u||^{p+1}_{p+1}\le \frac{1}{p+1}||u||^{p+1}_{p+1} +\frac{1}{k+1} ||u||^{k+1}_{k+1,\Gamma_1}.\label{3.9} \end{align}
(21)
Let us define the functional
\begin{align} & L(t)= H^{1-\sigma}(t)+\frac{\epsilon}{2}||\bigtriangledown u||^2+\frac{\epsilon}{2}||u||^2,\label{3.10} \end{align}
(22)
where \(\epsilon>0\) will be fixed later and \(0< \sigma \le \frac{p+1-q}{p+1}\) (this is possible since \(q-1< p\)). By taking the time derivative of (22), using problem (1), and performing several integrations by parts, we get
\begin{align} & L'(t)= (1-\sigma) H^{-\sigma}(t) H'(t)+\epsilon \int _\Omega \bigtriangledown u \bigtriangledown u_tdx +\epsilon \int _\Omega uu_tdx \nonumber\\ &= (1-\sigma) H^{-\sigma}(t) H'(t)+\epsilon \int _\Omega [uu_t-u\Delta u_t]dx+\epsilon \int _{\Gamma_1} u_t\frac{\partial u}{\partial \nu }dx \nonumber\\ &= (1-\sigma) H^{-\sigma}(t) H'(t)+2\epsilon H(t)+2\epsilon E(t)-\epsilon||\bigtriangledown u||^2\nonumber\\ &+\epsilon ||u||^{p+1}_{p+1}+\epsilon||u||^{k+1}_{k+1,\Gamma_1} -\epsilon\int _ \Omega|u|^{q-2}uu_tdx+\epsilon \int _{\Gamma_1} u_t|u|^{k-1}udx\nonumber\\ &= (1-\sigma) H^{-\sigma}(t) H'(t)+2\epsilon H(t)+\epsilon (1-\frac{2}{p+1})||u||^{p+1}_{p+1}\nonumber\\ &+\epsilon (1+\frac{2}{k+1})||u||^{k+1}_{k+1, \Gamma_1}-\epsilon\int _ \Omega|u|^{q-2}uu_tdx+\epsilon \int _{\Gamma_1} u_t|u|^{k-1}udx.\label{3.11} \end{align}
(23)
To estimate the last two terms in the right-hand side of (23), by the following Young's inequality \begin{align} &ab \le \delta^{-1}a^2+\delta b^2,\nonumber \end{align} we deduce that, for any \(\delta_1>0\) and \(\delta_2>0\), \begin{align} &\int _ \Omega|u|^{q-2}uu_tdx=\int _ \Omega(|u|^{\frac{q-2}{2}}u_t)(|u|^{\frac{q-2}{2}}u)dx \le \delta_1^{-1}\int _ \Omega|u|^{q-2}u^2_tdx +\delta_1\int _ \Omega|u|^{q}dx,\nonumber\\ &\int _ {\Gamma_1}|u|^{k-1}uu_tdx\nonumber\\ &=\int _ {\Gamma_1}(|u|^{\frac{k-1}{2}}u_t)(|u|^{\frac{k-1}{2}}u)dx \le \delta_2^{-1}\int _{\Gamma_1}|u|^{k-1}u^2_tdx +\delta_2\int _ {\Gamma_1}|u|^{k+1}dx.\nonumber \end{align} Therefore, we have
\begin{align} &L'(t)\ge (1-\sigma) H^{-\sigma}(t) H'(t)+2\epsilon H(t) +\epsilon (1-\frac{2}{p+1})||u||^{p+1}_{p+1}\nonumber\\ &+\epsilon (1+\frac{2}{k+1})||u||^{k+1}_{k+1, \Gamma_1}\nonumber\\ &-\epsilon \delta_1 ||u||^{q}_{q}-\epsilon \delta_2||u||^{k+1}_{k+1,\Gamma_1} -\epsilon \delta_1^{-1}\int _ \Omega|u|^{q-2}u^2_tdx-\epsilon \delta_2^{-1}\int _{\Gamma_1}|u|^{k-1}u^2_tdx.\label{3.12} \end{align}
(24)
By choosing \(\delta_1\) such that \(\delta_1^{-1}=M_1 H^{-\sigma}(t)\) for a sufficiently large constant \(M_1\) to be fixed later, and noting that $$ -\int _ \Omega|u|^{q-2}u^2_tdx\ge -H'(t), -\int _{\Gamma_1}|u|^{k-1}u^2_tdx \ge -H'(t) $$ by (9) and (17), we have
\begin{align} &L'(t)\ge [(1-\sigma- \epsilon M_1) H^{-\sigma}(t)-\epsilon \delta_2^{-1}]H'(t)+2\epsilon H(t) +\epsilon (1-\frac{2}{p+1})||u||^{p+1}_{p+1}\nonumber\\ &+\epsilon (1+\frac{2}{k+1}-\delta_2)||u||^{k+1}_{k+1, \Gamma_1}-\epsilon M_1^{-1} H^{\sigma}(t) ||u||^{q}_{q}.\label{3.13} \end{align}
(25)
Taking into account (21) and the embedding \(L^{p+1}(\Omega)\hookrightarrow L^{q}(\Omega)\), we get
\begin{align} H^{\sigma}(t) ||u||^{q}_{q}\le C_1||u||^{(p+1)\sigma}_{p+1}||u||^{q}_{q} \le C_2||u||^{(p+1)\sigma+q}_{p+1},\label{3.14} \end{align}
(26)
for some positive constants \(C_1\) and \(C_2\). Now apply the inequality
\begin{align} x^l\le (x+1)\le (1+\frac{1}{z})(x+z),x\ge 0,0\le l\le 1,z>0,\label{3.15} \end{align}
(27)
in particular, taking \(x=||u||^{p+1}_{p+1},l=\frac{(p+1)\sigma+q}{p+1},z=H(0)\), we obtain
\begin{align} ||u||^{(p+1)\sigma+q}_{p+1}=(||u||^{p+1}_{p+1})^l\le (1+\frac{1}{H(0)})(||u||^{p+1}_{p+1}+ H(0))\le C_3||u||^{p+1}_{p+1}.\label{3.16} \end{align}
(28)
where we have used the fact that \(0< \frac {q}{p+1}< 1\), \(0< \sigma \le \frac{p+1-q}{p+1}\) and (21).
By (25), (26) and (28), we have
\begin{align} &L'(t)\ge [(1-\sigma- \epsilon M_1) H^{-\sigma}(t)-\epsilon \delta_2^{-1}]H'(t)+2\epsilon H(t)\nonumber\\ &+\epsilon (1-\frac{2}{p+1}-C_3M_1^{-1})||u||^{p+1}_{p+1}+\epsilon (1+\frac{2}{k+1}-\delta_2)||u||^{k+1}_{k+1, \Gamma_1}.\label{3.17} \end{align}
(29)
Now, we take \(\delta_2\) such that \(1+\frac{2}{k+1}-\delta_2>0\), and we take \(M_1\) large enough such that \(1- \frac {2}{p+1}-C_3M_1^{-1}=C_{4}>0\). Once \(M_1\) and \(\delta_2\) are fixed, we can pick \(\epsilon\) small enough such that \begin{align} &1-\sigma-\epsilon M_1>0,\nonumber\\ &(1-\sigma- \epsilon M_1) H^{-\sigma}(t)-\epsilon \delta_2^{-1}>(1-\sigma- \epsilon M_1) H^{-\sigma}(0)-\epsilon \delta_2^{-1} >0,\nonumber \end{align} where we have used the fact that \(H^{-\sigma}(t)>H^{-\sigma}(0)\). Then there exists \(C_5>0\) such that (29) becomes
\begin{align} L'(t)\ge C_5( H(t) +||u||^{p+1}_{p+1}+||u||^{k+1}_{k+1, \Gamma_1}).\label{3.18} \end{align}
(30)
Then, we have \begin{align} & L(t)\ge L(0) \ge 0.\nonumber \end{align} On the other hand, by the definition of \(L(t)\) and (20), we have \begin{align} & L(t)= H^{1-\sigma}(t)-\epsilon(H(t)-\frac{1}{p+1}||u||^{p+1}_{p+1} +\frac{1}{k+1} ||u||^{k+1}_{k+1,\Gamma_1})+ \frac {\epsilon}{2}||u||^{2}\nonumber\\ &\le (1-\epsilon )H^{1-\sigma}(t)+\frac{\epsilon}{p+1}||u||^{p+1}_{p+1}-\frac{\epsilon}{k+1}||u||^{k+1}_{k+1,\Gamma_1} + \frac {\epsilon}{2}||u||^{2}\nonumber\\ &\le (1-\epsilon )H^{1-\sigma}(t)+\frac{\epsilon}{p+1}||u||^{p+1}_{p+1}+ \frac {\epsilon}{2}||u||^{2}, \nonumber \end{align} where we have used the fact \(H(t)\ge H^{1-\sigma}(t)\) (this can be ensured by (18), (19), \(0< \sigma < 1\) and that \(E(0)\) is sufficiently negative). By the inequality (27) with \(x=||u||^{\frac{p+1}{1-\sigma}}_{p+1}\), \(l=1-\sigma< 1\), \(z=H^{\frac {1}{1-\sigma}}(0)\), we have
\begin{align} ||u||^{p+1}_{p+1}=(||u||^{\frac{p+1}{1-\sigma}}_{p+1})^{1-\sigma}\le (1+\frac{1}{H^{\frac {1}{1-\sigma}}(0)})(||u||^{\frac{p+1}{1-\sigma}}_{p+1}+ H^{\frac {1}{1-\sigma}}(0))\le C_6||u||^{\frac{p+1}{1-\sigma}}_{p+1}.\label{3.19} \end{align}
(31)
Therefore, we get \begin{align} L(t)\le (1-\epsilon )H^{1-\sigma}(t)+C_6||u||^{\frac{p+1}{1-\sigma}}_{p+1}+ \frac {\epsilon}{2}||u||^{2}. \nonumber \end{align} Then, by the embedding \(L^{p+1}(\Omega)\hookrightarrow L^{2}(\Omega)\), we have, for fixed \(\epsilon\) sufficiently small,
\begin{align} L^{\frac{1}{1-\sigma}}(t) \le C_{7} [H(t)+||u||^{p+1}_{p+1}+ ||u||^\frac{2}{1-\sigma}_{p+1}]. \label{3.20} \end{align}
(32)
Using again the inequality (27) with \(x=||u||^{p+1}_{p+1}\), \(l=\frac{2}{(p+1)(1-\sigma)}< 1\) (since \( \sigma \le \frac{p+1-q}{p+1} < \frac{p-1}{p+1}\)), \(z=H(0)\), we have
\begin{align} ||u||^\frac{2}{1-\sigma}_{p+1}= (||u||^{p+1}_{p+1})^\frac{2}{(p+1)(1-\sigma)} \le (1+\frac{1}{H(0)})(||u||^{p+1}_{p+1}+ H(0))\le C_8||u||^{p+1}_{p+1}.\label{3.21} \end{align}
(33)
From (32) and (33), we obtain
\begin{align} L^{\frac{1}{1-\sigma}}(t) \le C_{9} (H(t)+||u||^{p+1}_{p+1}) \le C_{9}( H(t) +||u||^{p+1}_{p+1}+||u||^{k+1}_{k+1, \Gamma_1}). \label{3.22} \end{align}
(34)
Combining (30) and (34), we arrive at
\begin{align} L'(t)\ge C_{10}L^{\frac{1}{1-\sigma}}(t).\label{3.23} \end{align}
(35)
Integrating (35) between 0 and \(t\) yields \(L^{-\frac{\sigma}{1-\sigma}}(t)\le L^{-\frac{\sigma}{1-\sigma}}(0)-\frac{\sigma C_{10}}{1-\sigma}\,t\), so \(L(t)\) blows up at some finite time \(T^*\le \frac{1-\sigma}{\sigma C_{10}}\,L^{-\frac{\sigma}{1-\sigma}}(0)\); by (34), this forces blow-up of \(H(t)+||u ||^{p+1}_{p+1}+||u||^{k+1}_{k+1, \Gamma_1}\) as well. The theorem is proved.
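As a purely illustrative numerical sketch (the constants \(C\), \(\sigma\) and the initial value below are hypothetical and not taken from the problem), the differential inequality (35) can be compared with the model equation \(y'=C\,y^{1/(1-\sigma)}\), whose solution blows up exactly at \(T^*=\frac{1-\sigma}{C\sigma}\,y(0)^{-\sigma/(1-\sigma)}\); a crude forward Euler run in Python exceeds any fixed threshold near that time:

def blow_up_time(C, sigma, y0):
    # exact blow-up time of y' = C * y**(1/(1-sigma)), y(0) = y0 > 0, 0 < sigma < 1
    return (1.0 - sigma) / (C * sigma) * y0 ** (-sigma / (1.0 - sigma))

def euler_escape_time(C, sigma, y0, threshold=1e9, dt=1e-5):
    # forward Euler until the numerical solution exceeds the given threshold
    t, y = 0.0, y0
    while y < threshold:
        y += dt * C * y ** (1.0 / (1.0 - sigma))
        t += dt
    return t

C, sigma, y0 = 1.0, 0.25, 1.0   # hypothetical sample constants
print("predicted blow-up time T* =", blow_up_time(C, sigma, y0))
print("Euler solution exceeds 1e9 at t =", euler_escape_time(C, sigma, y0))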

In the following, we will prove that the solution will blow up provided that the initial energy satisfies \(E(0)>0\). The next lemma will play an essential role in our proof; it is similar to a lemma first used in [56]. The main idea of the proof is from Lemma 9.1 in [44].

Lemma 3.3. Let \(u\) be a solution of problem (1). Suppose that the assumption of \(k,p\) hold. Further assume that \(E(0)< E_1\) and \(||u(0)||_{H^1_{\Gamma_0}(\Omega)}> r_0\). Then there exists a constant \(r_1>r_0\) such that \(||u(t)||_{H^1_{\Gamma_0}(\Omega)}\ge r_1\), and $$\frac{1}{p+1}||u ||^{p+1}_{p+1}+\frac{1}{k+1} ||u||^{k+1}_{k+1,\Gamma_1}\ge \frac{1}{2}r_1^2 - F(r_1)=\frac{c_*^{p+1}}{p+1}r_1^{p+1}+\frac{B_*^{k+1}}{k+1} r_1^{k+1}.$$

Proof. We observe from (11) that

\begin{align} E(u(t))\ge F(||u||_{H^1_{\Gamma_0}(\Omega)}). \label{3.24} \end{align}
(36)
We have that \(F(r)\) is increasing for \(0 < r < r_0\), decreasing for \(r > r_0\), \(F(r_0) = E_1\), and \(\lim \limits_{r \rightarrow + \infty}F(r)=- \infty\). Then, since \(d\ge E_1>E(u(0))\ge F(||u(0)||_{H^1_{\Gamma_0}(\Omega)})\), there exist \(r_1'< r_0< r_1\), which verify
\begin{align} F(r_1)=F(r_1')=E(u(0)).\label{3.25} \end{align}
(37)
Considering that \(E(t)\) is non-increasing, we have
\begin{align} E(u(t))\le E(u(0)).\label{3.26} \end{align}
(38)
From (37) and (38) we have
\begin{align} F(||u(0)||_{H^1_{\Gamma_0}(\Omega)})\le E(u(0))=F(r_1).\label{3.27} \end{align}
(39)
Since \(||u(0)||_{H^1_{\Gamma_0}(\Omega)}, r_1\in (r_0, +\infty)\) and \(F(r)\) is decreasing in this interval, from (39) one has
\begin{align} ||u(0)||_{H^1_{\Gamma_0}(\Omega)}\ge r_1.\label{3.28} \end{align}
(40)
In the sequel, we will prove that
\begin{align} ||u(t)||_{H^1_{\Gamma_0}(\Omega)}\ge r_1.\label{3.29} \end{align}
(41)
In fact, we will argue by contradiction. Supposing that (41) does not hold, then, there exists \(t^* \in (0,T_0)\) such that
\begin{align} ||u(t^*)||_{H^1_{\Gamma_0}(\Omega)}< r_1.\label{3.30} \end{align}
(42)
If \(||u(t^*)||_{H^1_{\Gamma_0}(\Omega)}> r_0\), then, from (36), (37) and (42), we have
\begin{align} E(u(t^*))\ge F(||u(t^*)||_{H^1_{\Gamma_0}(\Omega)})>F(r_1)=E(u(0)), \nonumber \end{align}
which contradicts (38) and proves (41). Now, if \(||u(t^*)||_{H^1_{\Gamma_0}(\Omega)}\le r_0\), we have, taking (40) into account, that there exists \(r_2\) which verifies
\begin{align} ||u(t^*)||_{H^1_{\Gamma_0}(\Omega)}\le r_0< r_2< r_1\le ||u(0)||_{H^1_{\Gamma_0}(\Omega)}.\label{3.31} \end{align}
(43)
Consequently, from the continuity of \(||u(.)||_{H^1_{\Gamma_0}(\Omega)}\), there exists \(t'\in (0,t^*)\) verifying $$||u(t')||_{H^1_{\Gamma_0}(\Omega)}=r_2.$$ From the last identity and from (36), (37) and (43), we obtain \begin{align} E(u(t'))\ge F(||u(t')||_{H^1_{\Gamma_0}(\Omega)})>F(r_2)>F(r_1)=E(u(0)),\nonumber \end{align} which also contradicts (38) and proves (41). On the other hand, from the identity of the energy, it holds that
\begin{eqnarray} &&\frac{1}{2}|| u ||^{2}_{H^1_{\Gamma_0}(\Omega)}\le E(u(0))+\frac{1}{p+1}||u ||^{p+1}_{p+1}- \frac{1}{k+1} ||u||^{k+1}_{k+1,\Gamma_1}\nonumber\\ &&\le E(u(0))+\frac{1}{p+1}||u ||^{p+1}_{p+1}+\frac{1}{k+1} ||u||^{k+1}_{k+1,\Gamma_1},\label{3.32} \end{eqnarray}
(44)
which implies, from (37), (41) and by the definition of \(F\) , that \begin{eqnarray} &&\frac{1}{p+1}||u ||^{p+1}_{p+1}+\frac{1}{k+1} ||u||^{k+1}_{k+1,\Gamma_1}\ge \frac{1}{2}|| u ||^{2}_{H^1_{\Gamma_0}(\Omega)}- E(u(0))\nonumber\\ &&\ge \frac{1}{2}r_1^2 - F(r_1)=\frac{c_*^{p+1}}{p+1}r_1^{p+1}+\frac{B_*^{k+1}}{k+1} r_1^{k+1}.\nonumber \end{eqnarray}

Theorem 3.4. Suppose that the assumption (6) holds, \(u(0)=u_0 \in H^1_{\Gamma_0}(\Omega)\) and \(u\) is a local solution of the system (1), \(||u_0||_{H^1_{\Gamma_0}(\Omega)}> r_0\) and \(E(0)< E_1\). Then the solution of problem (1) blows up.

Proof. We set

\begin{align} & H(t)=E_2-E(t),\label{3.33} \end{align}
(45)
where \(E_2\) is a constant and \(E(0)< E_2< E_1< d\). By the definition of \(H(t)\) and (9)
\begin{align}& H'(t)=-E'(t) \ge 0 ,\label{3.34} \end{align}
(46)
which implies that \(H(t)\) is non-decreasing, and, consequently,
\begin{align} &H(t)\ge H(0)=E_2-E(0)>0.\label{3.35} \end{align}
(47)
Considering Lemma 3.3, we have that \(||u(t)||_{H^1_{\Gamma_0}(\Omega)}\ge r_1\), for some \(r_1 >r_0\). From this inequality, the definition of the energy and taking (45) into account, we deduce \begin{align}& H(t)=E_2-[\frac{1}{2}|| u ||^{2}_{H^1_{\Gamma_0}(\Omega)}-\frac{1}{p+1}||u ||^{p+1}_{p+1}+\frac{1}{k+1} ||u||^{k+1}_{k+1,\Gamma_1} ]\nonumber\\ &\le E_1-\frac{1}{2}|| u ||^{2}_{H^1_{\Gamma_0}(\Omega)}+\frac{1}{p+1}||u ||^{p+1}_{p+1}-\frac{1}{k+1} ||u||^{k+1}_{k+1,\Gamma_1}\nonumber\\ &\le E_1-\frac{1}{2}r_1^2+\frac{1}{p+1}||u ||^{p+1}_{p+1}-\frac{1}{k+1} ||u||^{k+1}_{k+1,\Gamma_1},\nonumber \end{align} which implies, having in mind that \(E_1=F(r_0)=\frac{1}{2}r_0^2-\frac{c_{*}^{p+1}}{p+1}r_0^{p+1} -\frac{B_{*}^{k+1}}{k+1}r_0^{k+1}\), that
\begin{align} &H(t)\le \frac{1}{2}r_0^2-\frac{c_{*}^{p+1}}{p+1}r_0^{p+1} -\frac{B_{*}^{k+1}}{k+1}r_0^{k+1}-\frac{1}{2}r_1^2+\frac{1}{p+1}||u ||^{p+1}_{p+1}-\frac{1}{k+1} ||u||^{k+1}_{k+1,\Gamma_1}\nonumber\\ &\le -\frac{c_{*}^{p+1}}{p+1}r_0^{p+1} -\frac{B_{*}^{k+1}}{k+1}r_0^{k+1}+\frac{1}{p+1}||u ||^{p+1}_{p+1}-\frac{1}{k+1} ||u||^{k+1}_{k+1,\Gamma_1}\nonumber\\ &\le \frac{1}{p+1}||u ||^{p+1}_{p+1}-\frac{1}{k+1} ||u||^{k+1}_{k+1,\Gamma_1}\nonumber\\ &\le \frac{1}{p+1}||u ||^{p+1}_{p+1}\le \frac{1}{p+1}||u ||^{p+1}_{p+1}+\frac{1}{k+1} ||u||^{k+1}_{k+1,\Gamma_1}.\label{3.36} \end{align}
(48)
The proof can now be completed along the same lines as the proof of Theorem 3.2.

4. Asymptotic stability

In this section, we establish the exponential decay of the solutions to problem (1). In this context, we have the following lemma.

Lemma 4.1. Let \(u\) be a solution to problem (1). Assume that assumption (6) holds and \(u_0 \in W\). Then we have

\begin{equation} || u ||^{2}_{H^1_{\Gamma_0}(\Omega)}\le \frac{2(p+1)}{p-1}E(t)\le \frac{2(p+1)}{p-1}E(0),\label{4.1} \end{equation}
(49)
\begin{equation} ||u||^{k+1}_{k+1,\Gamma_1}\le B_*^{k+1}(\frac{2(p+1)}{p-1}E(0))^{k-2} || u ||^{2}_{H^1_{\Gamma_0}(\Omega)},\label{4.2} \end{equation}
(50)
\begin{equation} ||u ||^{p+1}_{p+1}\le c_*^{p+1}(\frac{2(p+1)}{p-1}E(0))^{p-2} || u ||^{2}_{H^1_{\Gamma_0}(\Omega)}.\label{4.3} \end{equation}
(51)

Proof. By Lemma 2.5, we have \(u\in W\) and \(I(u)>0\). We know from (9) and the definition of \(E(t)\) that \begin{eqnarray} &&E(0)\ge E(u)=\frac{1}{2}|| u ||^{2}_{H^1_{\Gamma_0}(\Omega)}-\frac{1}{p+1}||u ||^{p+1}_{p+1}+ \frac{1}{k+1} ||u||^{k+1}_{k+1,\Gamma_1}\nonumber\\ &&\ge \frac{1}{2}|| u ||^{2}_{H^1_{\Gamma_0}(\Omega)}-\frac{1}{p+1}||u ||^{p+1}_{p+1}+ \frac{1}{p+1} ||u||^{k+1}_{k+1,\Gamma_1}+( \frac{1}{k+1}- \frac{1}{p+1} )||u||^{k+1}_{k+1,\Gamma_1}\nonumber\\ &&\ge (\frac{1}{2}-\frac{1}{p+1})|| u ||^{2}_{H^1_{\Gamma_0}(\Omega)}+\frac{1}{p+1}I(u)+( \frac{1}{k+1}- \frac{1}{p+1} )||u||^{k+1}_{k+1,\Gamma_1}\nonumber\\ &&\ge (\frac{1}{2}-\frac{1}{p+1})|| u ||^{2}_{H^1_{\Gamma_0}(\Omega)}+\frac{p-k}{(k+1)(p+1)}||u||^{k+1}_{k+1,\Gamma_1}.\nonumber \end{eqnarray} Thus we obtain (49). By the embedding \(H^1_{\Gamma_0}(\Omega)\hookrightarrow L^{k+1}(\Gamma_1)\) and (49), we have \begin{align}& ||u||^{k+1}_{k+1,\Gamma_1}\le B_*^{k+1}|| u ||^{k+1}_{H^1_{\Gamma_0}(\Omega)}\le B_*^{k+1}(\frac{2(p+1)}{p-1}E(0))^{k-2} || u ||^{2}_{H^1_{\Gamma_0}(\Omega)}.\nonumber \end{align} Then, (50) holds. By the embedding \(H^1_{\Gamma_0}(\Omega)\hookrightarrow L^{p+1}(\Omega)\) and (49), we have \begin{align}& ||u||^{p+1}_{p+1}\le c_*^{p+1}|| u ||^{p+1}_{H^1_{\Gamma_0}(\Omega)}\le c_*^{p+1}(\frac{2(p+1)}{p-1}E(0))^{p-2} || u ||^{2}_{H^1_{\Gamma_0}(\Omega)},\nonumber \end{align} Then, we conclude (51). Hence, we complete the proof. Now, we state an important lemma by Martinez [54].

Lemma 4.2. Let \(E:R^+\rightarrow R^+\) be a nonincreasing function. Assume that there exists \(\sigma>0\) for which \(\int_S^{+\infty}E(t)dt\le \sigma E(S)\) for any \(S\ge 0\), then there exist two positive constants \(C\) and \(\xi\) independent of t such that: $$0< E(t)\le Ce^{-\xi t }.$$
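For the reader's convenience, we sketch why an integral bound of this type forces exponential decay; this is only an outline of the idea behind [54], with the illustrative constants \(C=eE(0)\) and \(\xi=1/\sigma\). Setting \(\varphi(S):=\int_S^{+\infty}E(t)dt\), the hypothesis reads \(\varphi(S)\le -\sigma\varphi'(S)\), whence $$\varphi(S)\le \varphi(0)e^{-S/\sigma}\le \sigma E(0)e^{-S/\sigma},\qquad S\ge 0.$$ Since \(E\) is nonincreasing, for \(t\ge\sigma\) we have $$\sigma E(t)\le \int_{t-\sigma}^{t}E(s)ds\le \varphi(t-\sigma)\le \sigma E(0)\,e\,e^{-t/\sigma},$$ so \(E(t)\le eE(0)e^{-t/\sigma}\); the same bound holds trivially for \(0\le t< \sigma\).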

Theorem 4.3. Assume that assumption (6) holds and \(u_0 \in W\). Moreover, assume that \(E(0)< d\) and \(B_*^{k+1}(\frac{2(p+1)}{p-1}E(0))^{k-2} \frac{(p+1)(k-1)}{(p-1)(k+1)}+ c_*^{p+1}(\frac{2(p+1)}{p-1}E(0))^{p-2}=\alpha< 1\), then there exist two positive constants \(\hat{C}\) and \(\xi\) independent of \(t\) such that: $$0< E(t)\le \hat{C}e^{-\xi t }.$$

Proof. Multiplying the first equation in problem (1) by \(u\), integrating over \(\Omega\times (S,T)\), and performing several integrations by parts, we get:

\begin{eqnarray} &&\int_S^T\int_{\Omega}[uu_t+\nabla u \nabla u_t+|u|^{q-2}uu_t]dxdt+\int_S^T|| u ||^{2}_{H^1_{\Gamma_0}(\Omega)}dt\nonumber\\ &&+\int_S^T||u||^{k+1}_{k+1,\Gamma_1}dt+k\int_S^T\int_{\Gamma_1}|u|^{k-1}uu_t dxdt =\int_S^T|| u ||^{p+1}_{p+1}dt.\label{4.4} \end{eqnarray}
(52)
From the definition of \(E(t)\) and equation (52), we obtain
\begin{eqnarray} &&2\int_S^T E(t)dt=\int_S^T[|| u ||^{2}_{H^1_{\Gamma_0}(\Omega)}+ \frac{2}{k+1}||u||^{k+1}_{k+1,\Gamma_1}+\frac{2}{p+1}|| u ||^{p+1}_{p+1} ]dt\nonumber\\ &&=-\int_S^T\int_{\Omega}[uu_t+\nabla u \nabla u_t]dxdt-\int_S^T\int_{\Omega}|u|^{q-2}uu_tdxdt\nonumber\\ &&-k\int_S^T\int_{\Gamma_1}|u|^{k-1}uu_t dxdt-\frac{k-1}{k+1}\int_S^T||u||^{k+1}_{k+1,\Gamma_1}dt+\frac{p-1}{p+1}\int_S^T|| u ||^{p+1}_{p+1}dt.\nonumber\\ &&\label{4.5} \end{eqnarray}
(53)
Now, we estimate every term on the right-hand side of (53). Employing Hölder's inequality, Young's inequality, (49) and (9), the first and second terms on the right-hand side of (53) can be estimated as follows, for \(\delta_1>0\),
\begin{eqnarray} &&-\int_S^T\int_{\Omega}[uu_t+\nabla u \nabla u_t]dxdt\le \delta_1\int_S^T|| u ||^{2}_{H^1_{\Gamma_0}(\Omega)}dt+C(\delta_1)\int_S^T|| u_t||^{2}_{H^1_{\Gamma_0}(\Omega)}dt\nonumber\\ &&\le \delta_1 \frac{2(p+1)}{p-1}\int_S^TE(t)dt-C(\delta_1)\int_S^TE'(t)dt.\label{4.6} \end{eqnarray}
(54)
By Hölder's inequality, Young's inequality, (49) and (9), the third term on the right-hand side of (53) can be estimated as follows, for \(\delta_2>0\),
\begin{eqnarray} &&-\int_S^T\int_{\Omega}|u|^{q-2}uu_tdxdt=-\int_S^T\int_{\Omega}(|u|^{\frac{q-2}{2}}u_t)|u|^{\frac{q}{2}}dxdt \nonumber\\ &&\le \delta_2\int_S^T ||u||^{q}_qdt +C(\delta_2) \int_S^T\int _ \Omega|u|^{q-2}u^2_tdxdt\nonumber\\ &&\le \delta_2c_*^q\frac{2(p+1)}{p-1}(\frac{2(p+1)}{p-1}E(0))^{q-2} \int_S^T E(t)dt - C(\delta_2) \int_S^T E'(t)dt,\label{4.7} \end{eqnarray}
(55)
where we used the embedding \(H^1_{\Gamma_0}(\Omega)\hookrightarrow L^q(\Omega)\) and (49).
Proceeding as in the proof of (55) and using (50), we have
\begin{eqnarray} &&-\int_S^T\int _ {\Gamma_1}|u|^{k-1}uu_tdxdt \le \int_S^T\int _ {\Gamma_1}|u|^{\frac{k+1}{2}}(|u|^{\frac{k-1}{2}}u_t)dxdt\nonumber\\ &&\le \delta_3 \int_S^T||u||^{k+1}_{k+1,\Gamma_1}dt+ C(\delta_3)\int_S^T \int _{\Gamma_1}|u|^{k-1}u^2_tdxdt\nonumber\\ &&\le \delta_3 B_*^{k+1}(\frac{2(p+1)}{p-1}E(0))^{k-2} \frac{2(p+1)}{p-1} \int_S^T E(t)dt - C(\delta_3) \int_S^T E'(t)dt.\label{4.8} \end{eqnarray}
(56)
As for the fifth term on the right-hand side of (53), by (50) and (49), we arrive at
\begin{eqnarray} &&-\frac{k-1}{k+1}\int_S^T||u||^{k+1}_{k+1,\Gamma_1}dt \nonumber\\ &&\le 2 B_*^{k+1}(\frac{2(p+1)}{p-1}E(0))^{k-2} \frac{(p+1)(k-1)}{(p-1)(k+1)} \int_S^T E(t)dt.\nonumber\\ &&\label{4.9} \end{eqnarray}
(57)
For the sixth term on the right-hand side of (53), by (51) and (49), we get
\begin{eqnarray} &&\frac{p-1}{p+1}\int_S^T||u||^{p+1}_{p+1}dt \le 2 c_*^{p+1}(\frac{2(p+1)}{p-1}E(0))^{p-2} \int_S^T E(t)dt.\label{4.10} \end{eqnarray}
(58)
Then, combining these estimates (54)-(58), (53) becomes
\begin{eqnarray} &&2\int_S^T E(t)dt\nonumber\\ &&\le [\delta_1 \frac{2(p+1)}{p-1}+\delta_2c_*^q\frac{2(p+1)}{p-1}(\frac{2(p+1)}{p-1}E(0))^{q-2}\nonumber\\ &&+\delta_3 B_*^{k+1}(\frac{2(p+1)}{p-1}E(0))^{k-2} \frac{2(p+1)}{p-1}\nonumber\\ &&+2 B_*^{k+1}(\frac{2(p+1)}{p-1}E(0))^{k-2} \frac{(p+1)(k-1)}{(p-1)(k+1)}\nonumber\\ &&+ 2 c_*^{p+1}(\frac{2(p+1)}{p-1}E(0))^{p-2} ]\int_S^T E(t)dt\nonumber\\ &&- (C(\delta_1)+C(\delta_2)+C(\delta_3)) \int_S^T E'(t)dt.\label{4.11} \end{eqnarray}
(59)
Note that \(B_*^{k+1}(\frac{2(p+1)}{p-1}E(0))^{k-2} \frac{(p+1)(k-1)}{(p-1)(k+1)}+ c_*^{p+1}(\frac{2(p+1)}{p-1}E(0))^{p-2}=\alpha< 1\), and choose \(\delta_1>0,\delta_2>0,\delta_3>0\) sufficiently small such that
\begin{align} 2- \delta_1 \frac{2(p+1)}{p-1}-\delta_2c_*^q\frac{2(p+1)}{p-1}(\frac{2(p+1)}{p-1}E(0))^{q-2}\nonumber\\ -\delta_3 B_*^{k+1}(\frac{2(p+1)}{p-1}E(0))^{k-2} \frac{2(p+1)}{p-1} -2\alpha >0. \end{align}
(60)
Hence, setting \(\widetilde{C}:=C(\delta_1)+C(\delta_2)+C(\delta_3)\) and denoting by \(c_0>0\) the left-hand side of (60), we deduce from (59) that $$c_0\int_S^{T}E(t)dt\le \widetilde{C}\,(E(S)-E(T))\le \widetilde{C}\,E(S),$$ where \(E(T)\ge 0\) by (49). Therefore there exists a positive constant \(\sigma=\widetilde{C}/c_0 >0\) such that $$\int_S^{T}E(t)dt\le \sigma E(S), ~ for~ any ~S\ge 0.$$ Letting \(T\) go to \(+\infty\) on the left-hand side of this inequality, we see that the hypothesis of Lemma 4.2 is satisfied. Hence the conclusion of Theorem 4.3 is established.

By Lemma 4.1, we have the following result.

Corollary 4.4. Under the assumption of Theorem 4.3, there exist two positive constants \(C\) and \(\xi\) independent of \(t\) such that: $$|| u||_{H^1_{\Gamma_0}(\Omega)}\le Ce^{-\xi t }.$$

Remark 4.5. If \(g(u)\) is a boundary source term and \(f(u)\) is an absorptive term, similar results can also be obtained.

5. Conclusions

This paper considers the initial boundary value problem for the generalized Boussinesq equation with nonlinear interior source and boundary absorptive terms. Under appropriate assumptions imposed on the source and boundary absorption terms, we establish the global existence of solutions by using the potential well method combined with a standard continuation argument, and we give sufficient conditions for the blow-up of solutions with positive and negative initial energy, respectively, in finite time. These results differ from those in [4, 5]. We also obtain a general decay estimate for the energy by means of the integral inequality in [54].

Acknowledgements

This work is supported by the National Natural Science Foundation of China (No. 11801145).

Competing Interests

The authors declare that they have no competing interests.

References

  1. Sveshnikov, A. G., Al’shin, A. B., Korpusov, M. O., & Pletner, Y. D. (2007). Linear and nonlinear equations of Sobolev type. Fizmatlit, Moscow. [Google Scholor]
  2. Korpusov, M. O., & Sveshnikov, A. G. (2008). Sufficient close-to-necessary conditions for the blowup of solutions to a strongly nonlinear generalized Boussinesq equation. Computational Mathematics and Mathematical Physics, 48(9), 1591-1599. [Google Scholor]
  3. Al'shin, A. B., Korpusov, M. O., & Sveshnikov, A. G. (2011). Blow-up in nonlinear Sobolev type equations (Vol. 15). Walter de Gruyter. [Google Scholor]
  4. Korpusov, M. O., & Sveshnikov, A. G. (2008). Sufficient conditions for the blowup of a solution to the Boussinesq equation subject to a nonlinear Neumann boundary condition. Computational Mathematics and Mathematical Physics, 48(11), 2077-2080. [Google Scholor]
  5. Makarov, P. A. (2012). Blow-Up of the solution of the initial boundary-value problem for the generalized Boussinesq equation with nonlinear boundary condition. Mathematical Notes, 92(3-4), 519-531. [Google Scholor]
  6. Levine, H. A. (1973). Some nonexistence and instability theorems for solutions of formally parabolic equations of the form \(Pu_t=-Au+F(u)\). Archive for Rational Mechanics and Analysis, 51(5), 371-386. [Google Scholor]
  7. Kalantarov, V. K., & Ladyzhenskaya, O. A. (1978). The occurrence of collapse for quasilinear equations of parabolic and hyperbolic types. Journal of Soviet Mathematics, 10(1), 53-70. [Google Scholor]
  8. Karch, G. (1997). Asymptotic behaviour of solutions to some pseudoparabolic equations. Mathematical Methods in the Applied Sciences, 20(3), 271-289. [Google Scholor]
  9. Benjamin, T. B., Bona, J. L., & Mahony, J. J. (1972). Model equations for long waves in nonlinear dispersive systems. Phil. Trans. R. Soc. Lond. A, 272(1220), 47-78. [Google Scholor]
  10. Sun, C., & Yang, M. (2008). Dynamics of the nonclassical diffusion equations. Asymptotic Analysis, 59(1-2), 51-81. [Google Scholor]
  11. Payne, L. E., & Sattinger, D. H. (1975). Saddle points and instability of nonlinear hyperbolic equations. Israel Journal of Mathematics, 22(3-4), 273-303. [Google Scholor]
  12. Sattinger, D. H. (1968). On global solution of nonlinear hyperbolic equations. Archive for Rational Mechanics and Analysis, 30(2), 148-172. [Google Scholor]
  13. Yacheng, L., & Junsheng, Z. (2006). On potential wells and applications to semilinear hyperbolic equations and parabolic equations. Nonlinear Analysis: Theory, Methods & Applications, 64(12), 2665-2687. [Google Scholor]
  14. Xu, R., & Su, J. (2013). Global existence and finite time blow-up for a class of semilinear pseudo-parabolic equations. Journal of Functional Analysis, 264(12), 2732-2763. [Google Scholor]
  15. Korpusov, M. O., & Sveshnikov, A. G. (2006). Blow-up of solutions of nonlinear Sobolev type equations with cubic sources. Differential Equations, 42(3), 431-443. [Google Scholor]
  16. Korpusov, M. O. (2004). Blow-up of solutions of a class of strongly non-linear equations of Sobolev type. Izvestiya: Mathematics, 68(4), 783-832.[Google Scholor]
  17. Deng, K., & Levine, H. A. (2000). The role of critical exponents in blow-up theorems: the sequel. Journal of Mathematical Analysis and Applications, 243(1), 85-126. [Google Scholor]
  18. Messaoudi, S. A. (2002). A note on blow up of solutions of a quasilinear heat equation with vanishing initial energy. Journal of mathematical analysis and applications, 273(1), 243-247. [Google Scholor]
  19. Levine, H. A., Park, S. R., & Serrin, J. (1998). Global existence and nonexistence theorems for quasilinear evolution equations of formally parabolic type. journal of differential equations, 142(1), 212-229. [Google Scholor]
  20. Di, H., & Shang, Y. (2015). Global existence and nonexistence of solutions for the nonlinear pseudo‐parabolic equation with a memory term. Mathematical Methods in the Applied Sciences, 38(17), 3923-3936. [Google Scholor]
  21. Peng, X., Shang, Y., & Zheng, X. (2016). Blow-up phenomena for some nonlinear pseudo-parabolic equations. Applied Mathematics Letters, 56, 17-22. [Google Scholor]
  22. Chen, H., & Tian, S. (2015). Initial boundary value problem for a class of semilinear pseudo-parabolic equations with logarithmic nonlinearity. Journal of Differential Equations, 258(12), 4424-4442. [Google Scholor]
  23. Luo, P. (2015). Blow-up phenomena for a pseudo‐parabolic equation. Mathematical Methods in the Applied Sciences, 38(12), 2636-2641. [Google Scholor]
  24. Conti, M., & Marchini, E. M. (2016). A remark on nonclassical diffusion equations with memory. Applied Mathematics & Optimization, 73(1), 1-21. [Google Scholor]
  25. Korpusov, M. O., & Sveshnikov, A. G. (2005). Blow-up of solutions of a class of strongly non-linear dissipative wave equations of Sobolev type with sources. Izvestiya: Mathematics, 69(4), 733-770. [Google Scholor]
  26. Aristov, A. I. (2014). Modelling unsteady processes in semiconductors using a non-linear Sobolev equation. Izvestiya: Mathematics, 78(3), 427-442. [Google Scholor]
  27. Zhang, H., Lu, J., & Hu, Q. (2014). Exponential growth of solution of a strongly nonlinear generalized Boussinesq equation. Computers & Mathematics with Applications, 68(12), 1787-1793. [Google Scholor]
  28. Lu, J., Hu, Q., & Zhang, H. (2014). Blowup of Solution for a Class of Doubly Nonlinear Parabolic Systems. Journal of Function Spaces, 2014, Article ID:924596. [Google Scholor]
  29. Eden, A., Michaux, B., & Rakotoson, J. M. (1991). Doubly nonlinear parabolic-type equations as dynamical systems. Journal of Dynamics and Differential Equations, 3(1), 87-131. [Google Scholor]
  30. ElOuardi, H., & ElHachimi, A. (2006). Attractors for a class of doubly nonlinear parabolic systems. Electronic Journal of Qualitative Theory of Differential Equations, 2006(1), 1-15. [Google Scholor]
  31. Levine, H. A., & Sacks, P. E. (1984). Some existence and nonexistence theorems for solutions of degenerate parabolic equations. Journal of differential equations, 52(2), 135-161. [Google Scholor]
  32. Aristov, A. I. (2012). On the initial boundary-value problem for a nonlinear Sobolev-type equation with variable coefficient. Mathematical Notes, 91(5-6), 603-612. [Google Scholor]
  33. Korpusov, M. O. (2013). Solution blow-up for a class of parabolic equations with double nonlinearity. Sbornik: Mathematics, 204(3), 323-346. [Google Scholor]
  34. Truong, L. X., & Van Y, N. (2016). Exponential growth with \(L^p\)-norm of solutions for nonlinear heat equations with viscoelastic term. Applied Mathematics and Computation, 273, 656-663. [Google Scholor]
  35. Truong, L. X., & Van Y, N. (2016). On a class of nonlinear heat equations with viscoelastic term. Computers & Mathematics with Applications, 72(1), 216-232. [Google Scholor]
  36. Levine, H. A., Smith, R. A., & Payne, L. E. (1987). A potential well theory for the heat equation with a nonlinear boundary condition. Mathematical methods in the applied sciences, 9(1), 127-136. [Google Scholor]
  37. Vitillaro, E. (2005). Global existence for the heat equation with nonlinear dynamical boundary conditions. Proceedings of the Royal Society of Edinburgh Section A: Mathematics, 135(1), 175-207. [Google Scholor]
  38. Fiscella, A., & Vitillaro, E. (2011). Local Hadamard well-posedness and blow-up for reaction-diffusion equations with non-linear dynamical boundary conditions. Discrete and Continuous Dynamical Systems, 33 (11), 5015-5047. [Google Scholor]
  39. Amann, H. (1988). Parabolic evolution equations and nonlinear boundary conditions. Journal of Differential Equations, 72(2), 201-269. [Google Scholor]
  40. Escher, J. (1989). Global existence and nonexistence for semilinear parabolic systems with nonlinear boundary conditions. Mathematische Annalen, 284(2), 285-305. [Google Scholor]
  41. Escher, J. (1993). Quasilinear parabolic systems with dynamical boundary conditions. Communications in partial differential equations, 18(7-8), 1309-1364. [Google Scholor]
  42. Escher, J. (1995). On the qualitative behaviour of some semilinear parabolic problems. Differential and Integral Equations, 8(2), 247-267. [Google Scholor]
  43. Carvalho, A. N., Oliva, S. M., Pereira, A. L., & Rodriguez-Bernal, A. (1997). Attractors for parabolic problems with nonlinear boundary conditions. Journal of Mathematical Analysis and Applications, 207(2), 409-461. [Google Scholor]
  44. Cavalcanti, M. M., Cavalcanti, V. N. D., & Lasiecka, I. (2007). Well-posedness and optimal decay rates for the wave equation with nonlinear boundary damping–source interaction. Journal of Differential Equations, 236(2), 407-459. [Google Scholor]
  45. Lasiecka, I., & Tataru, D. (1993). Uniform boundary stabilization of semilinear wave equations with nonlinear boundary damping. Differential Integral Equations, 6(3), 507-533. [Google Scholor]
  46. Vitillaro, E. (2002). A potential well theory for the wave equation with nonlinear source and boundary damping terms. Glasgow Mathematical Journal, 44(3), 375-395. [Google Scholor]
  47. Vitillaro, E. (2002). Global existence for the wave equation with nonlinear boundary damping and source terms. Journal of Differential Equations, 186(1), 259-298. [Google Scholor]
  48. Bociu, L., & Lasiecka, I. (2010). Local Hadamard well-posedness for nonlinear wave equations with supercritical sources and damping. Journal of Differential Equations, 249(3), 654-683. [Google Scholor]
  49. Bociu, L., Rammaha, M., & Toundykov, D. (2011). On a wave equation with supercritical interior and boundary sources and damping terms. Mathematische Nachrichten, 284(16), 2032-2064. [Google Scholor]
  50. Wu, S. T. (2012). General decay and blow-up of solutions for a viscoelastic equation with nonlinear boundary damping-source interactions. Zeitschrift für angewandte Mathematik und Physik, 63(1), 65-106. [Google Scholor]
  51. Wu, S. T. (2014). Blow-up of positive initial energy solutions for a system of nonlinear wave equations with supercritical sources. Journal of Dynamical and Control Systems, 20(2), 207-227. [Google Scholor]
  52. Said-Houari, B., & Nascimento, F. A. F. (2013). Global existence and nonexistence for the viscoelastic wave equation with nonlinear boundary damping-source interaction. Communications on Pure & Applied Analysis, 12(1), 375-403. [Google Scholor]
  53. Graber, P. J., & Said-Houari, B. (2012). Existence and asymptotic behavior of the wave equation with dynamic boundary conditions. Applied Mathematics & Optimization, 66(1), 81-122. [Google Scholor]
  54. Martinez, P. (1999). A new method to obtain decay rate estimates for dissipative systems. ESAIM: Control, Optimisation and Calculus of Variations, 4, 419-444. [Google Scholor]
  55. Lions, J. L. (1969). Quelques methodes de resolution des problemes aur limites non lineaires. Dunod Gauthier-villars, Paris. [Google Scholor]
  56. Vitillaro, E. (1999). Global nonexistence theorems for a class of evolution equations with dissipation. Archive for Rational Mechanics and Analysis, 149(2), 155-182. [Google Scholor]
]]>
Old symmetry problem revisited https://old.pisrt.org/psr-press/journals/oma-vol-2-issue-2-2018/old-symmetry-problem-revisited/ Thu, 29 Nov 2018 21:40:28 +0000 https://old.pisrt.org/?p=1541
OMA-Vol. 2 (2018), Issue 2, pp. 89–92 | Open Access Full-Text PDF
Alexander G. Ramm
Abstract:It is proved that if the problem \(\nabla^2u=1\) in \(D\), \(u|_S=0\), \(u_N=m:=|D|/|S|\) is solvable, then \(D\) is a ball. At least two different proofs of this result have been published. The proof given in this paper is novel and short.
]]>
Open Access Full-Text PDF

Open Journal of Mathematical Analysis

Old symmetry problem revisited

Alexander G. Ramm\(^1\)
Department of Mathematics, Kansas State University, Manhattan, KS 66506, USA.; (A.G.R)
\(^{1}\)Corresponding Author; ramm@math.ksu.edu

Copyright © 2018 Alexander G. Ramm. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.

Abstract

It is proved that if the problem \(\nabla^2u=1\) in \(D\), \(u|_S=0\), \(u_N=m:=|D|/|S|\) is solvable, then \(D\) is a ball. At least two different proofs of this result have been published. The proof given in this paper is novel and short.

Keywords:

Symmetry problems.

1. Introduction

Let \(D\) be a bounded, smooth, connected domain in \(\mathbb{R}^3\), let \(S\) be its boundary, \(N\) the outer unit normal to \(S\), \(u_N\) the normal derivative of \(u\) on \(S\), \(|D|\) the volume of \(D\) and \(|S|\) the surface area of \(S\). Various symmetry problems were considered in [1, 2]. Consider the problem
\begin{equation}\label{e1} \nabla^2 u=1 \quad in \quad D, \quad u|_S=0, \quad u_N|_S=m=|D|/|S|. \end{equation}
(1)
Our result is the following:

Theorem 1.1. If problem (1) is solvable, then \(D\) is a ball.

This result was proved by different methods in [3] and in [4]. The proof, given in the next section, is novel, short and based on a new idea. We assume that \(D\subset \mathbb{R}^2\), so that \(S\) is a curve. Then the ball is a disc.

2. Proof of Theorem 1.1

Proof. Let \(s\) be the arc length parameter on \(S\), \(\bf{s}\) be the point on \(S\) corresponding to the parameter \(s\), \(\{x(s), y(s)\}\) be the parametric representation of \(S\), \({\bf{s}}=x(s)e_1+y(s)e_2\), where \(\{e_j\}|_{j=1,2}\) is a Cartesian basis in \(\mathbb{R}^2\). It is known that \(\frac{d{\bf{s}}}{ds}={\bf{t}}(s)\) is the tangent unit vector to \(S\) at the point \(\bf{s}\) and

\begin{equation}\label{e2} \frac{d\bf{t}}{ds}=k(s)\bf{\nu} (s), \end{equation}
(2)
where \(k(s)\ge 0\) is the curvature of \(S\) and \(\bf{\nu}(s)\) is the normal to \(S\). Since \(u_N=\nabla u\cdot N=m>0\) on \(S\) the convexity of \(S\) does not change sign, so \(\nu\) does not change sign, \(k(s)>0\) \(\forall s\in S\) and \(N(s)=-\bf{\nu}(s)\) \(\forall s\in S\). Differentiate the identity \(u(x(s),y(s))=0\) with respect to \(s\) and get \(\nabla u \cdot \bf{t}=0\). Differentiate this identity and use (1)-(2) to get
\begin{equation}\label{e3} u_{xx}t_1^2(s) +2u_{xy}t_1(s)t_2(s) +u_{yy}t_2^2(s)+\nabla u \cdot k(s)\nu (s)=0, \end{equation}
(3)
where \({\bf t} =t_1e_1+t_2e_2\). Rewrite (3) as
\begin{equation}\label{e4} u_{xx}t_1^2(s) +2u_{xy}t_1(s)t_2(s) +u_{yy}t_2^2(s)=m k(s). \end{equation}
(4)
Equation (4) holds in every coordinate system obtained from \(\{x,y\}\) by rotations. Clearly \(u_{xx}(s), u_{yy}(s), u_{xy}(s)\) cannot vanish simultaneously due to (4). Also \(u_{xx}(s), u_{yy}(s)\) cannot vanish simultaneously due to the first equation in (1). Equation (1) on the boundary yields:
\begin{equation}\label{e5} u_{xx}+u_{yy}=1. \end{equation}
(5)
We prove that (4) and (5) are not compatible (lead to a contradiction) except when \(S\) is a circle. Let \(u_{xx}:=p\), \(u_{xy}:=q\). Denote by \(A\) the \(2\times 2\) matrix with the elements \(A_{11}=p\), \(A_{22}=1-p\) (where (5) was used), \(A_{12}=A_{21}=q\). Let \(I\) be the identity matrix. The equation \(\det (A-\lambda I)=\lambda^2 -\lambda -p^2-q^2+p=0\) has two solutions, so the eigenvalues of \(A\) are:
\begin{equation}\label{e6} \lambda_{\pm}=\frac 1 2 \pm (\frac 14+p^2+q^2-p)^{1/2}=\frac 1 2 \pm [(\frac 1 2-p)^2+q^2]^{1/2}. \end{equation}
(6)
The corresponding eigenvectors are
\begin{equation}\label{e7} e_1=\{1, \gamma\}, \quad e_2=\{-\gamma, 1\},\quad \gamma:=\frac q{p+\lambda_{+}-1}. \end{equation}
(7)
Note that \(\lambda_{+}+\lambda_{-}=1\), \(\lambda_{+} \lambda_{-}=-p^2-q^2+p.\) Thus, \(\lambda_{+}>0\). The eigenvectors are orthogonal: \(e_1\cdot e_2=0\) but not normalized: \(\|e_1\|^2=\|e_2\|^2=1+\gamma^2\). Since \(\|e_1\|^2\) is invariant under rotations of a Cartesian coordinate system, so is \(\gamma^2\). Let \(w:=\{t_1,t_2\}\). Then (4) implies
\begin{equation}\label{e8} (Aw,w)=mk(s)>0. \end{equation}
(8)
Since \(e_1\) and \(e_2\) form an orthogonal basis in \(\mathbb{R}^2\) one can find unique constants \(c_1,\, c_2\) such that
\begin{equation}\label{e9} c_1e_1+c_2e_2=w. \end{equation}
(9)
Solving this linear algebraic system for \(c_1,\, c_2\) one gets:
\begin{equation}\label{e10} c_1=\frac{t_1+\gamma t_2}{\Delta},\quad c_2= \frac{t_2-\gamma t_1}{\Delta}, \end{equation}
(10)
where \(\Delta=1+\gamma^2\) is the determinant of the matrix of the system (9). Substitute \(w\) from (9) into (8) and get:
\begin{equation}\label{e11} [c_1^2\lambda_{+}+c_2^2\lambda_{-}](1+\gamma^2)=mk(s)>0, \end{equation}
(11)
where we have used the relations: \(Ae_j=\lambda_je_j\), \(\lambda_1:=\lambda_+\), \(\lambda_2:=\lambda_-\), \((e_1,e_2)=0\), \(\|e_j\|^2=1+\gamma^2\), \((Ae_j,e_j)=\lambda_j(1+\gamma^2)\), \(j=1,2\). Using (10) one gets from (11):
\begin{equation}\label{e12} (t_1+\gamma t_2)^ 2 \lambda_{+}+(t_2-\gamma t_1)^2\lambda_{-}=mk(s)(1+\gamma^2)>0. \end{equation}
(12)
We prove that (12) leads to a contradiction unless \(S\) is a circle. Assume first that \(\lambda_{-}< 0\) and recall that \(\lambda_{+}>0\). Choose a point \(s\in S\) and the Cartesian coordinate system such that \(t_1(s)+\gamma(s) t_2(s)=0\). This is possible since \(\gamma^2\) is invariant under rotations and the only restriction on the real-valued \(t_1,\,t_2\) is the relation \(t_1^2+t_2^2=1\). Since \(\lambda_{-}< 0\) and \(t_2-\gamma t_1\neq 0\), we have a contradiction with inequality (12).

Assume now that \(\lambda_{-}\ge 0\) and \(\lambda_{-}\neq \lambda_{+}\). Then the left side of (12) is not a constant as a function of \(\{t_1,t_2\}\), that is, not a constant with respect to rotations of the coordinate system, while its right side is a constant. Thus, we have a contradiction.

Suppose finally that \(\lambda_{-}= \lambda_{+}\). Then \(\lambda_{-}= \lambda_{+}=\frac 1 2\) at any \(s\in S\). This implies by formula (6) that \(p=\frac 1 2\), \(u_{yy}=\frac 1 2\) and \(q=0\) for all \(s\in S\). By formula (7) one gets \(\gamma=0\), \(\|e_j\|=1\). Consequently, by formula (4), it follows that \(k(s)=\frac 1 {2m}\). Thus, the curvature of \(S\) is a constant, so \(S\) is a circle of radius \(a\). Thus, \(m=\frac {\pi a^2}{2\pi a}=\frac a 2\), \(k(s)=\frac 1 a\) and the solution to problem (1) is \(u=\frac {|x|^2-a^2}{4}\). Obviously this \(u\) solves equation (1) and satisfies the first boundary condition in (1). The second boundary condition is also satisfied: \(u_N|_{S}=a/2\).
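As a quick check (not part of the original argument), for \(u=\frac{|x|^2-a^2}{4}\) one computes \(u_{xx}=u_{yy}=\frac 1 2\) and \(u_{xy}=0\), so \(p=\frac 1 2\), \(q=0\) and, by (6), \(\lambda_{\pm}=\frac 1 2\); moreover $$\nabla^2u=1,\qquad u\big|_{|x|=a}=0,\qquad u_N\big|_{|x|=a}=\frac{r}{2}\Big|_{r=a}=\frac a 2=m,$$ in agreement with the case \(\lambda_{-}=\lambda_{+}\) treated above.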

Theorem 1.1 is proved in the two-dimensional case. We leave the three-dimensional case to the reader; see [5].

Competing Interests

The author declares that he has no competing interests.

References

  1. Ramm, A. G. (2005). Inverse problems: mathematical and analytical techniques with applications to engineering. Springer.
  2. Ramm, A. G. (2017). Scattering by obstacles and potentials. World Scientific Publishers, Singapore. [Google Scholor]
  3. Ramm, A. G. (2013). Symmetry problem. Proceedings of the American Mathematical Society, 141(2), 515-521. [Google Scholor]
  4. Serrin, J. (1971). A symmetry problem in potential theory. Archive for Rational Mechanics and Analysis, 43(4), 304-318. [Google Scholor]
  5. Ramm, A. G. (2018). Necessary and sufficient condition for a surface to be a sphere. Open J. Math. Anal., 2(2), 51-52.
]]>
Oscillation behavior of second order nonlinear dynamic equation with damping on time scales https://old.pisrt.org/psr-press/journals/oma-vol-2-issue-2-2018/oscillation-behavior-of-second-order-nonlinear-dynamic-equation-with-damping-on-time-scales/ Mon, 12 Nov 2018 14:02:17 +0000 https://old.pisrt.org/?p=1384
OMA-Vol. 2 (2018), Issue 2, pp. 78–88 | Open Access Full-Text PDF
Fanfan Li, Zhenlai Han
Abstract:In this paper, we use Riccati transformation technique to establish some new oscillation criteria for the second order nonlinear dynamic equation with damping on time scales $$(r(t)(x^\Delta(t))^\alpha)^\Delta-p(t)(x^\Delta(t))^\alpha+q(t)f(x(t))=0.$$ Our results not only generalize some existing results, but also can be applied to the oscillation problems that are not covered in literature. Finally, we give some examples to illustrate our main results.
]]>
Open Access Full-Text PDF

Open Journal of Mathematical Analysis

Oscillation behavior of second order nonlinear dynamic equation with damping on time scales

Fanfan Li, Zhenlai Han\(^1\)
School of Mathematical Sciences, University of Jinan, Jinan, Shandong 250022, P R China.; (F.L & Z.H)

\(^{1}\)Corresponding Author; hanzhenlai@163.com

Copyright © 2018 Fanfan Li, Zhenlai Han. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.

Abstract

In this paper, we use Riccati transformation technique to establish some new oscillation criteria for the second order nonlinear dynamic equation with damping on time scales $$(r(t)(x^\Delta(t))^\alpha)^\Delta-p(t)(x^\Delta(t))^\alpha+q(t)f(x(t))=0.$$ Our results not only generalize some existing results, but also can be applied to the oscillation problems that are not covered in literature. Finally, we give some examples to illustrate our main results.

Keywords:

Dynamic equation on time scales; Oscillation; Dynamic equation; Damped.

1. Introduction

The calculus theory of time scales was introduced by Hilger [1] in order to unify, extend and generalize ideas from discrete calculus, quantum calculus and continuous calculus to arbitrary time scales calculus. A time scale \(\mathbb{T}\) is an arbitrary closed subset of the real numbers \(\mathbb{R}\). For an introduction to time scales calculus and dynamic equations, see the books by Bohner and Peterson [2, 3]. We are concerned with the oscillation behavior of all solutions of the second order nonlinear dynamic equation with damping on a time scale \(\mathbb{T}\) which is unbounded above
\begin{equation}\label{e1.1} \begin{aligned} (r(t)(x^{\Delta}(t))^{\alpha})^{\Delta}-p(t)(x^{\Delta}(t))^{\alpha}+q(t)f(x(t))=0, \end{aligned} \end{equation}
(1)
where \(t\in\mathbb{T}\), \(t\geqslant t_{0}>0\). The equation will be studied under the following assumptions:
(H1) \(r(t),q(t)\) are positive real-valued rd-continuous functions on \(\mathbb{T}\), \(p(t)< 0,\) \(\frac{p(t)}{r(t)}\in\mathcal{R}^{+}\), and \(\alpha\) is the quotient of two positive odd numbers;
(H2) \(f:\mathbb{R}\rightarrow \mathbb{R}\) is such that \(uf(u)>0\) for \(u\neq 0\);
(H3) \(f:\mathbb{R}\rightarrow \mathbb{R}\) is such that \(f(u)\geqslant ku^{\alpha}\) for \(u\neq 0\) and some \(k>0\);
(H4) \(\int^{\infty}_{t_{0}}\left(\frac{1}{r(t)}e_{\ominus\frac{p}{r}}(t,t_{0})\right)^{\frac{1}{\alpha}}\Delta t=\infty\).
We only consider those solutions of (1) which exist on some half-line \([t_0,\infty)_{\mathbb{T}}\) and satisfy \(\sup\{|x(t)|:t_1\leqslant t< \infty\}>0\) for any \(t_1 \geqslant t_0\). If \(x(t)\) satisfies (1) on \([t_{1},\infty)_{\mathbb{T}}\) for some \(t_{1}\geqslant t_{0},\) then the function \(x(t)\) is called a solution of (1). A solution \(x(t)\) of (1) is said to be oscillatory if it is neither eventually positive nor eventually negative; otherwise it is called nonoscillatory. The equation itself is called oscillatory if all of its solutions are oscillatory. In recent decades, much interest has focused on obtaining sufficient conditions for the oscillation of solutions of different classes of dynamic equations on time scales, and we refer the reader to the papers [4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14]. In particular, much work has been done on the following dynamic equations $$(p(t)x^{\Delta}(t))^{\Delta}+q(t)x(\sigma(t))=0,$$ $$(p(t)x^{\Delta}(t))^{\Delta}+q(t)(f\circ x(\sigma(t)))=0.$$ Erbe et al. [15] considered the second-order nonlinear damped dynamic equation $$(r(t)(x^{\Delta}(t))^{\gamma})^{\Delta}+p(t)(x^{\Delta\sigma}(t))^{\gamma}+q(t)f(x(\tau(t)))=0,$$ and obtained some oscillation criteria. Saker et al. [16] obtained oscillation criteria for difference equations with damping terms $$\Delta(a_{n}(\Delta x_{n})^{\gamma})+p_{n}(\Delta x_{n})^{\gamma}+q_{n}f(x_{n+1})=0.$$ Deng et al. [17] studied oscillation criteria for the second order nonlinear delay dynamic equation $$(r(t)|x^{\Delta}(t)|^{\gamma-1}x^{\Delta}(t))^{\Delta}+p(t)f(x(\tau(t)))=0.$$ Agwo et al. [18] considered the second order half linear delay dynamic equation $$(r(t)g(x^{\Delta}(t)))^{\Delta}+p(t)f(x(\tau(t)))=0,$$ and obtained some oscillation criteria. Note that in the special case when \(\mathbb{T}=\mathbb{R}\), (1) becomes the second-order nonlinear damped differential equation $$(r(t)(x'(t))^{\alpha})'-p(t)(x'(t))^{\alpha}+q(t)f(x(t))=0,\ \ t\in\mathbb{R},$$ and when \(\mathbb{T}=\mathbb{Z}\), (1) becomes the second-order nonlinear damped difference equation $$\Delta(r(t)(\Delta x(t))^{\alpha})-p(t)(\Delta x(t))^{\alpha}+q(t)f(x(t))=0,\ \ t\in\mathbb{Z},$$ where \(\Delta x(t)=x(t+1)-x(t)\). In this paper, we replace \(e_{p}(t,s)\) with \(e_{\ominus p}(t,s)\); this is the main difference between our paper and other articles. Our results extend and improve some well-known oscillation results. The paper is organized as follows. In Section 2, we present some basic definitions and useful results from the theory of calculus on time scales on which we rely in the later sections. In Section 3, we use the Riccati transformation technique, the integral averaging technique, and inequalities to obtain some sufficient conditions for the oscillation of every solution of (1). In Section 4, we give an example to illustrate our results. The last section is devoted to remarks and comments concerning our results, and we also formulate possible new research directions.

Preliminaries

Lemma 2.1.[2] We say that a function \(p:\mathbb{T}\rightarrow \mathbb{R}\) is regressive provided $$1+\mu(t)p(t)\neq 0$$ for all \(t\in {{\mathbb{T}}^{\kappa }}\) holds. We define the set \(\mathcal{R}^{+}\) of all positively regressive elements of \(\mathcal{R}\) by $$\mathcal{R}^{+}=\left\{p\in\mathcal{R}: 1+\mu(t)p(t)>0\ for \ all \ t\in\mathbb{T}\right\}.$$ \(\mathcal{R}^{+}\) is a subgroup of \(\mathcal{R}.\)
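For instance (an immediate consequence of the definition), when \(\mathbb{T}=\mathbb{R}\) we have \(\mu(t)=0\), so every rd-continuous function is positively regressive, while for \(\mathbb{T}=\mathbb{Z}\) we have \(\mu(t)=1\), so \(p\in\mathcal{R}^{+}\) if and only if \(p(t)>-1\) for all \(t\in\mathbb{Z}\).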

Lemma 2.2.[2] If \(p\in\mathcal{R},\) then the function \(\ominus p\) defined by $$(\ominus p)(t):=-\frac{p(t)}{1+\mu(t)p(t)}$$ for all \(t\in \mathbb{T}^{\kappa }\) is also an element of \(\mathcal{R}\). Moreover, if \(p\in\mathcal{R}^{+},\) then \(\ominus p\in\mathcal{R}^{+}\), and \(e_{p}(t,t_{0})>0\) for all \(t\in\mathbb{T}.\)

Lemma 2.3.[2] If \(p\in\mathcal{R},\) then the exponential function is defined by $$e_{p}(t,s)=\exp\left(\int^{t}_{s}\xi_{\mu(\tau)}(p(\tau))\Delta\tau\right)\ \ \ for \ \ s,t\in\mathbb{T},$$ where \(\xi_{h}(z)=\frac{1}{h}\log(1+zh)\) for \(h>0\) and \(\xi_{0}(z)=z.\)
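Two classical special cases may help fix ideas (they follow directly from this definition): for \(\mathbb{T}=\mathbb{R}\) one has \(\mu\equiv 0\) and \(\xi_{0}(z)=z\), while for \(\mathbb{T}=\mathbb{Z}\) one has \(\mu\equiv 1\) and \(\xi_{1}(z)=\log(1+z)\), so that $$e_{p}(t,s)=\exp\left(\int^{t}_{s}p(\tau)d\tau\right) \ \ (\mathbb{T}=\mathbb{R}),\qquad e_{p}(t,s)=\prod_{\tau=s}^{t-1}(1+p(\tau)) \ \ (\mathbb{T}=\mathbb{Z},\ t>s).$$ The first of these is the form used in Example 4.1 below.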

Lemma 2.4.[2] If \(p,q\in\mathcal{R},\) then
(1) \(e_{p}(t,t)\equiv 1;\)
(2) \(e_{p}(\sigma(t),s)=(1+\mu(t)p(t))e_{p}(t,s);\)
(3) \(\frac{1}{e_{p}(t,s)}=e_{\ominus p}(t,s).\)

Lemma 2.5.[2] Let \(y\in C_{rd}\) and \(p\in\mathcal{R}^{+}.\) Then we have \begin{equation}\label{e2.5} \begin{aligned} (ye_{\ominus p}(\cdot, t_{0}))^{\Delta}(t)&=y^{\Delta}(t)e_{\ominus p}(\sigma(t),t_{0})+y(t)(\ominus p)(t)e_{\ominus p}(t,t_{0})\\ &=y^{\Delta}(t)e_{\ominus p}(\sigma(t),t_{0})+y(t)\frac{(\ominus p)(t)}{1+\mu(t)(\ominus p)(t)}e_{\ominus p}(\sigma(t),t_{0})\\ &=\left(y^{\Delta}(t)-(\ominus(\ominus p))(t)y(t)\right)e_{\ominus p}(\sigma(t),t_{0})\\ &=(y^{\Delta}(t)-p(t)y(t))e_{\ominus p}(\sigma(t),t_{0}). \nonumber \end{aligned} \end{equation}

Lemma 2.6.[6] Assume that \(\alpha>0\) is the ratio of positive odd integers and \(x^\alpha(t) \in C_{rd}^1(I,\mathbb{R})\). Then \begin{equation} (x^\alpha(t))^\Delta\geqslant \begin{cases} \alpha(x(\sigma(t)))^{\alpha-1}x^\Delta(t),\ \ \ 0< \alpha\leqslant 1,\\ \alpha(x(t))^{\alpha-1}x^\Delta(t),\ \ \ \ \ \ \ \alpha\geqslant 1.\nonumber \end{cases} \end{equation}
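As a simple illustration of the second case (a check, not part of the cited result), take \(\mathbb{T}=\mathbb{Z}\), \(x(t)=t\) for \(t\geqslant 1\) and \(\alpha=3\); then \(x^{\Delta}(t)=1\) and $$(x^{3}(t))^{\Delta}=(t+1)^{3}-t^{3}=3t^{2}+3t+1\geqslant 3t^{2}=\alpha(x(t))^{\alpha-1}x^{\Delta}(t).$$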

3. Main Results

Now, we are in a position to state and prove some new results which guarantee that every solution of (1) oscillates.

Theorem 3.1. Assume that (H1)-(H4) hold. Furthermore, assume that there exists a positive real rd-continuous differentiable function \(v(t)\) such that

\begin{equation}\label{e3.1} \begin{aligned} \underset{t\rightarrow \infty}{\lim\sup}\int^{t}_{t_{0}}\left(kv(s)q(s)-\frac{\psi^{\alpha+1}(s)r(s)}{(\alpha+1)^{\alpha+1}v^{\alpha}(s)}\right)\Delta s=\infty, \end{aligned} \end{equation}
(2)
where
\begin{equation}\label{e3.2} \begin{aligned} \psi(t)=\frac{r(t)v^{\Delta}(t)+v(t)p(t)}{r(t)} \end{aligned} \end{equation}
(3)
Then every solution of (1) is oscillatory.

Proof. Suppose to the contrary that \(x(t)\) is a nonoscillatory solution of (1). Without loss of generality, we may assume that \(x(t)>0\) for \(t\geqslant t_{1}>t_{0}\). We shall consider only this case, since in view of (H2), the proof of the case when \(x(t)\) is eventually negative is similar. Now, we claim that \(x^{\Delta}(t)\) has a fixed sign on the interval \([t_{2},\infty)\) for some \(t_{2}\geqslant t_{1}\). From (1), since \(q(t)>0\) and \(f(x(t))>0\), we have $$(r(t)(x^{\Delta}(t))^{\alpha})^{\Delta}-p(t)(x^{\Delta}(t))^{\alpha}=-q(t)f(x(t))< 0,$$ i.e., $$(r(t)(x^{\Delta}(t))^{\alpha})^{\Delta}-p(t)(x^{\Delta}(t))^{\alpha}< 0.$$ By setting \(y(t)=r(t)(x^{\Delta}(t))^{\alpha}\), we immediately see that \(y^{\Delta}(t)-\frac{p(t)}{r(t)}y(t)< 0\); by Lemmas 2.2 and 2.5, we have \((y(t)e_{\ominus\frac{p}{r}})^{\Delta}< 0\). Then \(y(t)e_{\ominus\frac{p}{r}}\) is decreasing and thus \(y(t)\) is eventually of one sign. Then \(x^{\Delta}(t)\) has a fixed sign for all sufficiently large \(t\) and we have one of the following: \begin{equation} \begin{cases} \text{Case (1)}. \ x^{\Delta}(t)\ \text{is eventually positive}.\\ \text{Case (2)}. \ x^{\Delta}(t)\ \text{is eventually negative}.\nonumber \end{cases} \end{equation} First, we consider Case (1): \(x^{\Delta}(t)> 0\) on \([t_{2}, \infty)\) for some \(t_{2}\geqslant t_{1}\). Then in view of (1) and (H1) we have $$x(t)>0, x^{\Delta}(t)> 0, (r(t)(x^{\Delta}(t))^{\alpha})^{\Delta}< 0,\ t\geqslant t_{2}.$$ We see that for \(t\geqslant t_{3}=\sigma (t_{2})\)

\begin{equation}\label{e3.3} \begin{aligned} r(t)(x^{\Delta}(t))^{\alpha}> r(\sigma(t))(x^{\Delta}(\sigma(t)))^{\alpha}, x^{\alpha}(\sigma(t))> x^{\alpha}(t). \end{aligned} \end{equation}
(4)
Define the function \(w(t)\) by the Riccati substitution
\begin{equation}\label{e3.4} \begin{aligned} w(t):= v(t)r(t)\left(\frac{x^{\Delta}(t)}{x(t)}\right)^{\alpha}, t\geqslant t_{2}. \end{aligned} \end{equation}
(5)
In view of (1), we have
\begin{equation}\label{e3.5} \begin{aligned} w^{\Delta}(t)=&r(\sigma(t))(x^{\Delta}(\sigma(t)))^{\alpha}\left(\frac{v(t)}{x^{\alpha}(t)}\right)^{\Delta}+\frac{v(t)(r(t)(x^{\Delta}(t))^{\alpha})^{\Delta}}{x^{\alpha}(t)}\\ =&r(\sigma(t))(x^{\Delta}(\sigma(t)))^{\alpha}\frac{v^{\Delta}(t)x^{\alpha}(t)-v(t)(x^{\alpha}(t))^{\Delta}}{x^{\alpha}(t)x^{\alpha}(\sigma(t))}\\ &+\frac{v(t)}{x^{\alpha}(t)}(p(t)(x^{\Delta}(t))^{\alpha}-q(t)f(x(t)))\\ =&-\frac{v(t)q(t)f(x(t))}{x^{\alpha}(t)}+v(t)\frac{p(t)(x^{\Delta}(t))^{\alpha}}{x^{\alpha}(t)}+\frac{v^{\Delta}(t)}{v(\sigma(t))}w(\sigma(t))\\ &-\frac{v(t)r(\sigma(t))(x^{\Delta}(\sigma(t)))^{\alpha}(x^{\alpha}(t))^{\Delta}}{x^{\alpha}(t)x^{\alpha}(\sigma(t))}, \end{aligned} \end{equation}
(6)
Using (4) in (6) and by (H3), we have \begin{equation} \begin{aligned} w^{\Delta}(t)\leqslant&-kv(t)q(t)+v(t)\frac{p(t)r(\sigma(t))(x^{\Delta}(\sigma(t)))^{\alpha}}{r(t)x^{\alpha}(\sigma(t))}+\frac{v^{\Delta}(t)}{v(\sigma(t))}w(\sigma(t))\\ &-\frac{v(t)r(\sigma(t))(x^{\Delta}(\sigma(t)))^{\alpha}(x^{\alpha}(t))^{\Delta}}{x^{\alpha}(t)x^{\alpha}(\sigma(t))}\\ =&-kv(t)q(t)+\frac{\psi(t)}{v(\sigma(t))}w(\sigma(t))-\frac{v(t)r(\sigma(t))(x^{\Delta}(\sigma(t)))^{\alpha}(x^{\alpha}(t))^{\Delta}}{x^{\alpha}(t)x^{\alpha}(\sigma(t))}, \nonumber \end{aligned} \end{equation} where \(\psi(t)\) is as defined in (3). By Lemma 2.6, if \(0< \alpha\leqslant1\), we have
\begin{equation}\label{e3.6} \begin{aligned} w^{\Delta}(t)\leqslant&-kv(t)q(t)+\frac{\psi(t)}{v(\sigma(t))}w(\sigma(t)) -\frac{\alpha v(t)r(\sigma(t))(x^{\Delta}(\sigma(t)))^{\alpha}x^{\Delta}(t)}{x^{\alpha}(t)x(\sigma(t))}\\ \leqslant&-kv(t)q(t)+\frac{\psi(t)}{v(\sigma(t))}w(\sigma(t))-\frac{\alpha v(t)r(\sigma(t))(x^{\Delta}(\sigma(t)))^{\alpha}x^{\Delta}(t)}{x^{\alpha+1}(\sigma(t))}, \end{aligned} \end{equation}
(7)
if \(\alpha>1\), we have
\begin{equation}\label{e3.7} \begin{aligned} w^{\Delta}(t)\leqslant&-kv(t)q(t)+\frac{\psi(t)}{v(\sigma(t))}w(\sigma(t)) -\frac{\alpha v(t)r(\sigma(t))(x^{\Delta}(\sigma(t)))^{\alpha}x^{\Delta}(t)}{x(t)x^{\alpha}(\sigma(t))}\\ \leqslant&-kv(t)q(t)+\frac{\psi(t)}{v(\sigma(t))}w(\sigma(t))-\frac{\alpha v(t)r(\sigma(t))(x^{\Delta}(\sigma(t)))^{\alpha}x^{\Delta}(t)}{x^{\alpha+1}(\sigma(t))}. \end{aligned} \end{equation}
(8)
Thus, by (7) and (8), we obtain
\begin{equation}\label{e3.8} \begin{aligned} w^{\Delta}(t)&\leqslant -kv(t)q(t)+\frac{\psi(t)}{v(\sigma(t))}w(\sigma(t))-\frac{\alpha v(t)r(\sigma(t))(x^{\Delta}(\sigma(t)))^{\alpha}x^{\Delta}(t)}{x^{\alpha+1}(\sigma(t))}\\ &=-kv(t)q(t)+\frac{\psi(t)}{v(\sigma(t))}w(\sigma(t))-\frac{\alpha v(t)r^{\frac{1}{\alpha}}(t)x^{\Delta}(t)}{v(\sigma(t))x(\sigma(t))r^{\frac{1}{\alpha}}(t)}w(\sigma(t))\\ &\leqslant-kv(t)q(t)+\frac{\psi(t)}{v(\sigma(t))}w(\sigma(t)) -\frac{\alpha v(t)r^{\frac{1}{\alpha}}(\sigma(t))x^{\Delta}(\sigma(t))}{v(\sigma(t))x(\sigma(t))r^{\frac{1}{\alpha}}(t)}w(\sigma(t))\\ &=-kv(t)q(t)+\frac{\psi(t)}{v(\sigma(t))}w(\sigma(t))-\frac{\alpha v(t)}{v^{\frac{\alpha+1}{\alpha}}(\sigma(t))r^{\frac{1}{\alpha}}(t)}w^{\frac{\alpha+1}{\alpha}}(\sigma(t)) \end{aligned} \end{equation}
(9)
hold for all \(\alpha>0\). Then, using the inequality [19] $$Bu-Cu^{\frac{1+\alpha}{\alpha}}\leqslant \frac{\alpha^{\alpha}}{(1+\alpha)^{\alpha+1}}\frac{B^{\alpha+1}}{C^{\alpha}},$$ let \(B=\frac{\psi(t)}{v(\sigma(t))},\) \(C=\frac{\alpha v(t)}{v^{\frac{\alpha+1}{\alpha}}(\sigma(t))r^{\frac{1}{\alpha}}(t)}\) and \(u=w(\sigma(t)),\) we obtain
\begin{equation}\label{e3.9} \begin{aligned} w^{\Delta}(t)&\leqslant -kv(t)q(t) +\frac{\alpha^{\alpha}}{(1+\alpha)^{1+\alpha}}\left(\frac{\psi(t)}{v(\sigma(t))}\right)^{\alpha+1} \left(\frac{v^{\frac{\alpha+1}{\alpha}}(\sigma(t))r^{\frac{1}{\alpha}}(t)}{\alpha v(t)}\right)^{\alpha}\\ &=-kv(t)q(t)+\frac{\psi^{\alpha+1}(t)r(t)}{(\alpha+1)^{\alpha+1}v^{\alpha}(t)}. \end{aligned} \end{equation}
(10)
Integrating (10) from \(t_{3}\) to \(t\), we obtain $$w(t)-w(t_{3})\leqslant-\int^{t}_{t_{3}}\left(kv(s)q(s)-\frac{\psi^{\alpha+1}(s)r(s)}{(\alpha+1)^{\alpha+1}v^{\alpha}(s)}\right)\Delta s,$$ which yields $$\int^{t}_{t_{3}}\left(kv(s)q(s)-\frac{\psi^{\alpha+1}(s)r(s)}{(\alpha+1)^{\alpha+1}v^{\alpha}(s)}\right)\Delta s\leqslant w(t_{3})-w(t)< w(t_{3})$$ for all large \(t\). This is contrary to (2). Next, we consider Case (2). Then there exists \(t_{2}\geqslant t_{1}\) such that \((x^{\Delta}(t))^{\alpha}< 0\) for \(t\geqslant t_{2}\). Define the function \(u(t)=-r(t)(x^{\Delta}(t))^{\alpha}\). Then from (1), we have $$u^{\Delta}(t)-\frac{p(t)}{r(t)}u(t)\geqslant 0.$$ Thus $$u(t)\geqslant u(t_{2})e_{\ominus\frac{p(t)}{r(t)}}(t,t_{2}),$$ so that $$(x^{\Delta}(t))^{\alpha}\leqslant-u(t_{2})\left(\frac{1}{r(t)}e_{\ominus\frac{p(t)}{r(t)}}(t,t_{2})\right),$$ i.e.
\begin{equation}\label{e3.10} \begin{aligned} x^{\Delta}(t)\leqslant \left(-u(t_{2})\left(\frac{1}{r(t)}e_{\ominus\frac{p(t)}{r(t)}}(t,t_{2})\right)\right)^{\frac{1}{\alpha}}. \end{aligned} \end{equation}
(11)
Integrating (11) from \(t_{2}\) to \(t\), we get $$x(t)-x(t_{2})\leqslant(r(t_{2}))^{\frac{1}{\alpha}}x^{\Delta}(t_{2})\int^{t}_{t_{2}}\left(\frac{1}{r(s)}e_{\ominus\frac{p}{r}}(s,t_{2}) \right)^{\frac{1}{\alpha}}\Delta s.$$ Condition (H4) implies that \(x(t)\) is eventually negative, which is a contradiction. The proof is complete.
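The elementary inequality from [19] used in (10) (and again in the proof of Theorem 3.5) can be verified by a one-variable maximization; here is a sketch for \(B\geqslant 0\), \(C>0\) and \(u\geqslant 0\). The function \(g(u)=Bu-Cu^{\frac{1+\alpha}{\alpha}}\) is concave, its derivative \(g'(u)=B-\frac{1+\alpha}{\alpha}Cu^{\frac{1}{\alpha}}\) vanishes at \(u_{\ast}=\left(\frac{\alpha B}{(1+\alpha)C}\right)^{\alpha}\), and therefore $$g(u)\leqslant g(u_{\ast})=\frac{\alpha^{\alpha}}{(1+\alpha)^{\alpha+1}}\frac{B^{\alpha+1}}{C^{\alpha}}.$$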

Corollary 3.2. Assume that (H1)-(H4) hold. If $$\limsup\limits_{t\rightarrow\infty}\int^{t}_{t_{0}}\left(kq(s)-\frac{p^{\alpha+1}(s)}{(\alpha+1)^{\alpha+1}r^{\alpha}(s)}\right)\Delta s=\infty,$$ then every solution of (1) is oscillatory.

Corollary 3.3. Assume that (H1)-(H4) hold. If there is \(\lambda \geqslant1\) such that $$\limsup\limits_{t\rightarrow\infty}\int^{t}_{t_{0}}\left(ks^{\lambda}q(s)-\frac{(r(s)(s^{\lambda})^{\Delta} -s^{\lambda}p(s))^{\alpha+1}}{(\alpha+1)^{\alpha+1}(s^{\lambda})^{\alpha}r^{\alpha}(s)}\right)\Delta s=\infty,$$ then every solution of (1) is oscillatory.

Corollary 3.4. Assume that (H1)-(H4) hold. If $$\limsup\limits_{t\rightarrow\infty}\int^{t}_{t_{0}}\left(kR(s,t_{0})q(s)-\frac{(r(s)(R(s,t_{0}))^{\Delta} -R(s,t_{0})p(s))^{\alpha+1}}{(\alpha+1)^{\alpha+1}(R(s,t_{0}))^{\alpha}r^{\alpha}(s)}\right)\Delta s=\infty,$$ where \(R(t,t_{0})=\int^{t}_{t_{0}}\frac{1}{r(s)}\Delta s,\) then every solution of (1) is oscillatory.
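Corollaries 3.2-3.4 appear to correspond to the particular choices \(v(t)\equiv 1\), \(v(t)=t^{\lambda}\) and \(v(t)=R(t,t_{0})\) in Theorem 3.1. As a sketch for Corollary 3.2 (assuming \(v(t)\equiv 1\)), one has \(v^{\Delta}(t)=0\), hence $$\psi(t)=\frac{p(t)}{r(t)}\qquad\text{and}\qquad \frac{\psi^{\alpha+1}(s)r(s)}{(\alpha+1)^{\alpha+1}v^{\alpha}(s)}=\frac{p^{\alpha+1}(s)}{(\alpha+1)^{\alpha+1}r^{\alpha}(s)},$$ so condition (2) reduces to the condition displayed in Corollary 3.2.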

Theorem 3.5. Assume that (H1)-(H4) hold. Furthermore, suppose that \(v(t)\) is as defined in Theorem 3.1 and that there exists a function \(H\in C(\mathbb{D},\mathbb{R})\), where \(\mathbb{D}:=\{(t,s):t\geqslant s\geqslant t_{0}\},\) such that \begin{equation} \begin{aligned} &H(t,t)=0, \ \ for \ t\geqslant t_{0},\\ &H(t,s)>0, \ \ for \ (t,s)\in\mathbb{D}_{0}, \nonumber \end{aligned} \end{equation} where \(\mathbb{D}_{0}:=\{(t,s):t>s\geqslant t_{0}\}\), and \(H\) has a nonpositive continuous partial derivative \(H^{\Delta_{s}}(t,s):=\partial H(t,s)/\partial s\) on \(\mathbb{D}_{0}\) with respect to the second variable and satisfies

\begin{equation}\label{e3.11} \begin{aligned} \limsup\limits_{t\rightarrow\infty}\frac{1}{H(t,t_{0})}\int^{t}_{t_{0}}\left(kH(t,s)v(s)q(s) -\frac{v^{\alpha+1}(\sigma(t))r(s)A^{\alpha+1}(t,s)}{(1+\alpha)^{1+\alpha}H^{\alpha}(t,s)v^{\alpha}(s)}\right)\Delta s=\infty, \end{aligned} \end{equation}
(12)
where $$A(t,s)=H(t,s)\frac{\psi(s)}{v(\sigma(t))}+H^{\Delta_{s}}(t,s).$$ Then every solution of (1) is oscillatory.

Proof. Suppose to the contrary that \(x(t)\) is a nonoscillatory solution of (1) and let \(t_{1}\geqslant t_{0}\) be such that \(x(t)\neq0\) for all \(t\geqslant t_{1}\), so without loss of generality, we may assume that \(x(t)\) is an eventually positive solution of (1) with \(x(t)>0\) for \(t\geqslant t_{1}\) sufficiently large. In view of Theorem 3.1 we see that \(x^{\Delta}(t)\) is eventually negative or eventually positive. If \(x^{\Delta}(t)\) is eventually negative, we are then back to Case (2) of Theorem 3.1 and we obtain a contradiction. If \(x^{\Delta}(t)\) is eventually positive, we assume that there exists \(t_{2}> t_{1}\) such that \(x^{\Delta}(t)\geqslant 0\) for \(t\geqslant t_{2}\) and proceed as in the proof of Case (1) of Theorem 3.1 and get (9). From (9), it follows that

\begin{equation}\label{e3.12} \begin{aligned} \int^{t}_{t_{2}}kH(t,s)v(s)q(s)\Delta s\leqslant &-\int^{t}_{t_{2}}H(t,s)w^{\Delta}(s)\Delta s+\int^{t}_{t_{2}}H(t,s)\frac{\psi(s)}{v(\sigma(s))}w(\sigma(s))\Delta s\\ &-\int^{t}_{t_{2}}H(t,s)\frac{\alpha v(s)}{ v^{\frac{\alpha+1}{\alpha}}(\sigma(s))r^{\frac{1}{\alpha}}(s)}w^{\frac{\alpha+1}{\alpha}}(\sigma(s))\Delta s, \end{aligned} \end{equation}
(13)
Using the integration by parts formula, we have
\begin{equation}\label{e3.13} \begin{aligned} \int^{t}_{t_{2}}H(t,s)w^{\Delta}(s)\Delta s&=H(t,s)w(s)|^{t}_{t_{2}}-\int^{t}_{t_{2}}H^{\Delta_{s}}(t,s)w(\sigma(s))\Delta s\\ &=-H(t,t_{2})w(t_{2})-\int^{t}_{t_{2}}H^{\Delta_{s}}(t,s)w(\sigma(s))\Delta s, \end{aligned} \end{equation}
(14)
where \(H(t,t)=0.\) Substituting (14) into (13), we obtain \begin{equation} \begin{aligned} \int^{t}_{t_{2}}kH(t,s)v(s)q(s)\Delta s\leqslant &H(t,t_{2})w(t_{2})+\int^{t}_{t_{2}}H^{\Delta_{s}}(t,s)w(\sigma(s))\Delta s +\int^{t}_{t_{2}}H(t,s)\frac{\psi(s)}{v(\sigma(s))}w(\sigma(s))\Delta s\\ &-\int^{t}_{t_{2}}H(t,s)\frac{\alpha v(s)}{ v^{\frac{\alpha+1}{\alpha}}(\sigma(s))r^{\frac{1}{\alpha}}(s)}w^{\frac{\alpha+1}{\alpha}}(\sigma(s))\Delta s. \nonumber \end{aligned} \end{equation} Hence, \begin{equation} \begin{aligned} \int^{t}_{t_{2}}kH(t,s)v(s)q(s)\Delta s\leqslant &H(t,t_{2})w(t_{2})+\int^{t}_{t_{2}}\left(H(t,s)\frac{\psi(s)}{v(\sigma(s))}+H^{\Delta_{s}}(t,s)\right)w(\sigma(s))\Delta s\\ &-\int^{t}_{t_{2}}H(t,s)\frac{\alpha v(s)}{ v^{\frac{\alpha+1}{\alpha}}(\sigma(s))r^{\frac{1}{\alpha}}(s)}w^{\frac{\alpha+1}{\alpha}}(\sigma(s))\Delta s. \nonumber \end{aligned} \end{equation} Then, using the inequality [19] $$Bu-Cu^{\frac{1+\alpha}{\alpha}}\leqslant \frac{\alpha^{\alpha}}{(1+\alpha)^{\alpha+1}}\frac{B^{\alpha+1}}{C^{\alpha}},$$ with \(B=H(t,s)\frac{\psi(s)}{v(\sigma(s))}+H^{\Delta_{s}}(t,s),\) \(C=H(t,s)\frac{\alpha v(s)}{v^{\frac{\alpha+1}{\alpha}}(\sigma(s))r^{\frac{1}{\alpha}}(s)}\) and \(u=w(\sigma(s)),\) we obtain \begin{equation} \begin{aligned} \int^{t}_{t_{2}}kH(t,s)v(s)q(s)\Delta s\leqslant &H(t,t_{2})w(t_{2})+\int^{t}_{t_{2}}\frac{v^{\alpha+1}(\sigma(t))r(s) A^{\alpha+1}(t,s)}{(1+\alpha)^{1+\alpha}H^{\alpha}(t,s)v^{\alpha}(s)}\Delta s. \nonumber \end{aligned} \end{equation} Then for all \(t\geqslant t_{2}\), we have $$\int^{t}_{t_{2}}\left(kH(t,s)v(s)q(s)-\frac{v^{\alpha+1}(\sigma(t))r(s)A^{\alpha+1}(t,s)}{(1+\alpha)^{1+\alpha}H^{\alpha}(t,s)v^{\alpha}(s)}\right) \Delta s\leqslant H(t,t_{2})w(t_{2}),$$ and this implies that $$\frac{1}{H(t,t_{2})}\int^{t}_{t_{2}}\left(kH(t,s)v(s)q(s)-\frac{v^{\alpha+1}(\sigma(t))r(s)A^{\alpha+1}(t,s)}{(1+\alpha)^{1+\alpha} H^{\alpha}(t,s)v^{\alpha}(s)}\right)\Delta s\leqslant w(t_{2})$$ for all large \(t\), which contradicts (12). The proof is complete.

4. Examples

Example 4.1. Consider the equation

\begin{equation}\label{e4.1} \begin{aligned} ((x^{\Delta}(t))^{\alpha})^{\Delta}+\frac{1}{t}(x^{\Delta}(t))^{\alpha}+tx^{\alpha}(t)=0, \end{aligned} \end{equation}
(15)
where \(r(t)=1\), \(p(t)=-\frac{1}{t}\), \(q(t)=t\), \(f(x(t))=x^{\alpha}(t)\) with \(k=1\), and \(\alpha>0\). It is clear that conditions (H1)-(H4) are satisfied. Letting \(v(t)=t\), \(\mathbb{T}=[1,\infty)\), by Lemma 2.3 and Lemma 2.4 we have \begin{equation} \begin{aligned} e_{\frac{p}{r}}(t,t_{0})&=e_{-\frac{1}{t}}(t,1)=\exp\left(\int_{1}^{t}\xi_{0}(-\frac{1}{\tau})d\tau\right)\\ &=\exp\left(\int_{1}^{t}(-\frac{1}{\tau})d\tau\right) =\frac{1}{t}, \nonumber \end{aligned} \end{equation} $$e_{\ominus\frac{p}{r}}(t,t_{0})=\frac{1}{e_{\frac{p}{r}}(t,t_{0})}=t,$$ $$\int_{t_{0}}^{\infty}\left(\frac{1}{r(t)}e_{\ominus\frac{p}{r}}(t,t_{0})\right)^{\frac{1}{\alpha}}\Delta t =\int_{t_{0}}^{\infty}\left(t\right)^{\frac{1}{\alpha}}\Delta t=\infty,$$ $$\psi(t)=v^{\Delta}(t)+v(t)p(t)=0.$$ Hence, $$\underset{t\rightarrow \infty}{\lim\sup}\int^{t}_{t_{0}}\left(kv(s)q(s) -\frac{\psi^{\alpha+1}(s)r(s)}{(\alpha+1)^{\alpha+1}v^{\alpha}(s)}\right)\Delta s=\underset{t\rightarrow \infty}{\lim\sup}\int^{t}_{t_{0}}s^{2}\Delta s=\infty.$$ That is, (2) holds. By Theorem 3.1 we see that (15) is oscillatory.

5. Conclusion

The results of this article are presented in a form which is essentially new and of a high degree of generality. In this article, using a generalized Riccati transformation and an inequality technique, we offer some new sufficient conditions which ensure that every solution of the dynamic equation (1) oscillates. In addition, in future work we may try to obtain oscillation results for the dynamic equation (1) when \(q(t)< 0\) or when \(\int^{\infty}_{t_{0}}\left(\frac{1}{r(t)}e_{\ominus\frac{p}{r}}(t,t_{0})\right)^{\frac{1}{\alpha}}\Delta t< \infty\).

Acknowledgments

This research is supported by Shandong Provincial Natural Science Foundation (ZR2017MA043).

Competing Interests

The authors declare that they have no competing interests.

References

  1. Hilger, S. (1990). Analysis on measure chains—a unified approach to continuous and discrete calculus. Results in Mathematics, 18(1-2), 18-56. [Google Scholor]
  2. Bohner, M., & Peterson, A. Dynamic Equations on Time Scales: An Introduction with Applications. 2001. [Google Scholor]
  3. Bohner, M., & Peterson, A. C. (Eds.). (2002). Advances in dynamic equations on time scales. Springer Science & Business Media. [Google Scholor]
  4. Erbe, L. (2001). Oscillation criteria for second order linear equations on a time scale. Canad. Appl. Math. Quart, 9(4), 345-375. [Google Scholor]
  5. Şahı, Y. (2005). Oscillation of second-order delay differential equations on time scales. Nonlinear Analysis: Theory, Methods & Applications, 63(5-7), e1073-e1080. [Google Scholor]
  6. Saker, S. H. (2006). Oscillation of second-order nonlinear neutral delay dynamic equations on time scales. Journal of Computational and Applied Mathematics, 187(2), 123-141. [Google Scholor]
  7. Han, Z., Sun, S., & Shi, B. (2007). Oscillation criteria for a class of second-order Emden–Fowler delay dynamic equations on time scales. Journal of Mathematical Analysis and Applications, 334(2), 847-858.[Google Scholor]
  8. Hassan, T. S., Erbe, L., & Peterson, A. (2010). Oscillation of second order superlinear dynamic equations with damping on time scales. Computers & Mathematics with Applications, 59(1), 550-558.[Google Scholor]
  9. Shi, Y., Han, Z., & Hou, C. (2017). Oscillation criteria for third order neutral Emden–Fowler delay dynamic equations on time scales. Journal of Applied Mathematics and Computing, 55(1-2), 175-190.[Google Scholor]
  10. Sui, Y., & Han, Z. (2017). Oscillation of third-order nonlinear delay dynamic equation with damping term on time scales. Journal of Applied Mathematics and Computing, 1-23. [Google Scholor]
  11. Sui, Y., & Sun, S. (2018). Oscillation of third order nonlinear damped dynamic equation with mixed arguments on time scales. Advances in Difference Equations, 2018(1), 233. [Google Scholor]
  12. Saker, S. H. (2004). Oscillation of nonlinear dynamic equations on time scales. Applied Mathematics and Computation, 148(1), 81-91. [Google Scholor]
  13. Hassan, T. S. (2008). Oscillation criteria for half-linear dynamic equations on time scales. Journal of Mathematical Analysis and Applications, 345(1), 176-185. [Google Scholor]
  14. Saker, S. H., Agarwal, R. P., & O'Regan, D. (2007). Oscillation of second-order damped dynamic equations on time scales. Journal of Mathematical Analysis and Applications, 330(2), 1317-1337.[Google Scholor]
  15. Erbe, L., Hassan, T. S., & Peterson, A. (2008). Oscillation criteria for nonlinear damped dynamic equations on time scales. Applied Mathematics and Computation, 203(1), 343-357.[Google Scholor]
  16. Saker, S. H., & Cheng, S. S. (2004). Oscillation criteria for difference equations with damping terms. Applied mathematics and computation, 148(2), 421-442. [Google Scholor]
  17. Deng, X. H., Wang, Q. R., & Zhou, Z. (2015). Oscillation criteria for second order nonlinear delay dynamic equations on time scales. Applied Mathematics and Computation, 269, 834-840.[Google Scholor]
  18. Agwo, H. A., Khodier, A. M. M., & Hassan, H. A. (2017). Oscillation criteria of second order half linear delay dynamic equations on time scales. Acta Mathematicae Applicatae Sinica, English Series, 1(33), 83-92.[Google Scholor]
  19. Zhang, S. Y., & Wang, Q. R. (2010). Oscillation of second-order nonlinear neutral dynamic equations on time scales. Applied Mathematics and Computation, 216(10), 2837-2848.[Google Scholor]
]]>
\(L^p-\) boundedness for integral transforms associated with singular partial differential operators https://old.pisrt.org/psr-press/journals/oma-vol-2-issue-2-2018/lp-boundedness-for-integral-transforms-associated-with-singular-partial-differential-operators/ Tue, 30 Oct 2018 18:14:30 +0000 https://old.pisrt.org/?p=1310
OMA-Vol. 2 (2018), Issue 2, pp. 53–77 | Open Access Full-Text PDF
Lakhdar T. Rachdi, Samia Sghaier
Abstract:We define fractional transforms \(\mathscr{R}_\mu\) and \(\mathscr{H}_\mu\), \(\mu>0\), on the space \(\mathbb{R}\times\mathbb{R}^n\). First, we study these transforms on regular function spaces, establish that these operators are topological isomorphisms, and give the inverse operators as integro-differential operators. Next, we study the \(L^p\)-boundedness of these operators. Namely, we give a necessary and sufficient condition on the parameter \(\mu\) for which the transforms \(\mathscr{R}_\mu\) and \(\mathscr{H}_\mu\) are bounded on the weighted spaces \(L^p([0,+\infty[\times\mathbb{R}^n,r^{2a}dr\otimes dx)\), and we give their norms.
]]>
Open Access Full-Text PDF

Open Journal of Mathematical Analysis

\(L^p-\) boundedness for integral transforms associated with singular partial differential operators

Lakhdar T. Rachdi\(^1\), Samia Sghaier
Université de Tunis El manar, Faculté des Sciences de Tunis, UR11ES23 Analyse géométrique et harmonique, 2092 Tunis, Tunisia.; (L.T.R & S.S)
\(^{1}\)Corresponding Author; lakhdartannech.rachdi@fst.rnu.tn

Copyright © 2018 Lakhdar T. Rachdi, Samia Sghaier. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.

Abstract

We define fractional transforms \(\mathscr{R}_\mu\) and \(\mathscr{H}_\mu\), \(\mu>0\), on the space \(\mathbb{R}\times\mathbb{R}^n\). First, we study these transforms on regular function spaces: we establish that these operators are topological isomorphisms and we give the inverse operators as integro-differential operators. Next, we study the \(L^p\)-boundedness of these operators. Namely, we give a necessary and sufficient condition on the parameter \(\mu\) for which the transforms \(\mathscr{R}_\mu\) and \(\mathscr{H}_\mu\) are bounded on the weighted spaces \(L^p([0,+\infty[\times\mathbb{R}^n,r^{2a}dr\otimes dx)\), and we give their norms.

Keywords:

Fractional transform; Transmutation operator; Fourier transform; \(L^p\)-boundedness; Singular operator.

1. Introduction

Let \(D_j,\ 1\leq j \leq n\), and \(\Xi_\mu,\ \mu>0\), be the singular partial differential operators defined by \begin{eqnarray*}\left \{ \begin{array}{ll} D_j=\displaystyle\frac{\partial}{\partial x_j}\\ \Xi_\mu=\displaystyle(\frac{\partial}{\partial r})^2+\frac{2\mu}{r}\frac{\partial}{\partial r}+ \sum_{j=1}^n(\frac{\partial}{\partial x_j})^2 ; (r,x)\in ]0,+ \infty[\times\mathbb{R}^n, \mu>0. \end{array} \right. \end{eqnarray*} \(\Xi_\mu\) is a Bessel-Laplace operator.

When \(\mu=\frac{n-1}{2}\), \(n\in\mathbb{N}^\ast\), \(\Xi_{\frac{n-1}{2}}\) is the Laplacian operator on \(\mathbb{R}^n\times\mathbb{R}^n\) acting on the functions \(f:\mathbb{R}^n\times\mathbb{R}^n\longrightarrow\mathbb{C}\) that are radial with respect to the first variable.

For every \((\lambda_0,\lambda)\in \mathbb{C}\times\mathbb{C}^n\), the system \begin{eqnarray*}\left \{ \begin{array}{lll}D_ju(r,x)=\displaystyle-i\lambda_ju(r,x), 1\leqslant j\leqslant n\\ \Xi_\mu u(r,x)=\displaystyle-(\lambda^2_0+\lambda^2)u(r,x)\\ \displaystyle u(0,0)=1, \frac{\partial}{\partial r}u(0,x)=0, \forall x\in\mathbb{R}^ n \end{array} \right. \end{eqnarray*} admits a unique solution given by

\begin{eqnarray}\label{1.1} \displaystyle\psi_{\lambda_0,\lambda}(r,x)=j_{\mu-\frac{1}{2}}(r\lambda_0)e^{-i\langle\lambda|x \rangle}, \end{eqnarray}
(1)
where

$$\lambda^2=\lambda_1^2+\lambda_2^2+...+\lambda^2_n,\ \lambda=(\lambda_1,\lambda_2,...,\lambda_n)$$ $$\langle\lambda|x\rangle=\lambda_1 x_1 +\lambda_2 x_2 +...+\lambda_n x_n$$ \(j_{\mu-\frac{1}{2}}\) is the modified Bessel function given by \begin{eqnarray*} % \nonumber to remove numbering (before each equation) j_{\mu-\frac{1}{2}}(s) &=& 2^{\mu-\frac{1}{2}}\Gamma(\mu+\frac{1}{2})\frac{J_{\mu-\frac{1}{2}}(s)}{s^{\mu-\frac{1}{2}}}\\ &=& \Gamma(\mu+\frac{1}{2})\sum_{k=0}^\infty\frac{(-1)^k}{k! \ \Gamma(\mu+k+\frac{1}{2})}(\frac{s}{2})^{2k}\\ &=&\frac{2\ \Gamma(\mu+\frac{1}{2})}{\sqrt{\pi}\ \Gamma(\mu)}\int_0^1(1-t^2)^{\mu-1}\cos(st)dt, \end{eqnarray*} and \(J_{\mu-\frac{1}{2}}\) is the Bessel function of first kind and index \(\mu-\frac{1}{2}\) [1,2,3, 4].
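For instance, when \(\mu=1\), the last integral representation gives $$j_{\frac{1}{2}}(s)=\frac{2\ \Gamma(\frac{3}{2})}{\sqrt{\pi}\ \Gamma(1)}\int_0^1\cos(st)dt=\frac{\sin(s)}{s},$$ so that in this case the eigenfunction (1) reduces to \(\psi_{\lambda_0,\lambda}(r,x)=\frac{\sin(\lambda_0 r)}{\lambda_0 r}\,e^{-i\langle\lambda|x \rangle}.\)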

The eigenfunction \(\psi_{\lambda_0,\lambda}\) allows us to define the Fourier transform \(\widetilde{\mathscr{F}}_{\mu -\frac{1}{2}}\) connected with the operators \(D_j,\ 1\leqslant j\leqslant n\) and \(\Xi_\mu\) by

\begin{eqnarray}\label{1.2} \nonumber \widetilde{\mathscr{F}}_{\mu-\frac{1}{2}}(f)(\lambda_0,\lambda) &=& \int_0^\infty\int_{\mathbb{R}^n}f(r, x)\psi_{\lambda_0, \lambda}(r, x)d\nu_\mu(r, x) \\ &=& \int_0^\infty\int_{\mathbb{R}^n}f(r, x)j_{\mu-\frac{1}{2}}(r \lambda_0)e^{-i \langle\lambda|x\rangle}d\nu_\mu(r, x), \end{eqnarray}
(2)
where \(f\) is any integrable function on \([0, +\infty[\times\mathbb{R}^n\) with respect to the measure
\begin{eqnarray}\label{1.3} d\nu_\mu(r,x) &=& \frac{r^{2 \mu}dr}{2^{\mu-\frac{1}{2}}\ \Gamma (\mu+\frac{1}{2})}\otimes \frac{d x}{(2\pi)^{\frac{n}{2}}}. \end{eqnarray}
(3)
Many harmonic analysis results related to the Fourier transform \(\widetilde{\mathscr{F}}_{\mu-\frac{1}{2}}\) have been established [5, 6, 7, 8, 9, 10].

Also, many uncertainty principles have been checked for this transform [11, 12, 13, 14].

On the other hand, the eigenfunction \(\psi_{\lambda_0,\lambda}\) admits the Poisson integral representation

\begin{eqnarray}\label{1.4} \nonumber \psi_{\lambda_0,\lambda}(r,x) &=& \frac{2\ \Gamma(\mu+\frac{1}{2})}{\sqrt{\pi}\ \Gamma(\mu)}r^{1-2\mu}\int_0^r(r^2-t^2)^{\mu-1}\cos(\lambda_{0}t)e^{-i\langle\lambda|x \rangle}dt\\ &=& \frac{2\Gamma(\mu+\frac{1}{2})}{\sqrt{\pi}\ \Gamma(\mu)}\int_0^1(1-t^2)^{\mu-1}\cos(\lambda_{0}rt)e^{-i\langle\lambda|x \rangle}dt. \end{eqnarray}
(4)
Using the relation (4), we define the fractional transform \(\mathscr{R}_\mu\) on \(\mathscr{C}_e(\mathbb{R}\times\mathbb{R}^n)\) (the space of continuous functions on \(\mathbb{R}\times\mathbb{R}^n\), even with respect to the first variable) by
\begin{eqnarray}\label{1.5} \nonumber\mathscr{R}_\mu(f) (r,x) &=& \frac{2\ \Gamma(\mu+\frac{1}{2})}{\sqrt{\pi}\ \Gamma(\mu)}r^{1-2\mu}\int_0^r(r^2-t^2)^{\mu-1}f(t,x)dt;(r,x)\in]0, +\infty[\times\mathbb{R}^n\\ &=& \frac{2\ \Gamma(\mu+\frac{1}{2})}{\sqrt{\pi}\ \Gamma(\mu)}\int_0^1(1-t^2)^{\mu-1}f(tr,x)dt;\ (r,x)\in\mathbb{R}\times\mathbb{R}^n. \end{eqnarray}
(5)
In particular, this implies that
\begin{eqnarray}\label{rel1.6} \psi_{\lambda_{0},\lambda}(r,x) &=& \mathscr{R}_\mu\big(\cos(\lambda_{0}\cdot)e^{-i\langle\lambda |\cdot \rangle}\big)(r,x), \end{eqnarray}
(6)
which gives the connection between the functions \(\psi_{\lambda_{0},\lambda}\) and \(\cos(\lambda_0 \cdot) e^{-i\langle\lambda|\cdot\rangle}.\)

On the other hand, we shall prove in the next section that for every integrable function \(f\) on \([0, +\infty[\times\mathbb{R}^n\) with respect to the measure \(d\nu_\mu(r,x)\) and for every bounded function \(g\) on \(\mathbb{R}\times\mathbb{R}^n\), even with respect to the first variable, we have the duality relation

\begin{eqnarray}\label{1.7} \int_0^\infty\int_{\mathbb{R}^n}f(r,x)\mathscr{R}_\mu(g)(r,x)d\nu_\mu(r,x) &=&\int_0^\infty\int_{\mathbb{R}^n}g(r,x)\mathscr{H}_\mu(f)(r,x)dm(r,x), \end{eqnarray}
(7)

where \(dm\) is the Lebesgue measure on \(]0, +\infty[\times\mathbb{R}^n\),

\begin{eqnarray}\label{rel1.8} dm(r,x) &=& \sqrt{\frac{2}{\pi}}dr\otimes\frac{dx}{(2\pi)^{\frac{n}{2}}}. \end{eqnarray}
(8)
\(\mathscr{H}_\mu\) is the fractional transform defined by \begin{eqnarray*} \displaystyle\mathscr{H}_\mu(f)(r,x) &=& \frac{1}{2^\mu\ \Gamma(\mu)}\int_r^\infty(t^2-r^2)^{\mu-1}f(t,x)2tdt. \end{eqnarray*} The relations (2), (6) and (7) show that for all integrable functions \(f, g\) on \([0, +\infty[\times\mathbb{R}^n\) with respect to the measure \(d\nu_\mu(r,x),\) we have
\begin{eqnarray}\label{1.9} \widetilde{\mathscr{F}}_{\mu-\frac{1}{2}}(f) &=& \Lambda\circ\mathscr{H}_\mu(f) \end{eqnarray}
(9)
and
\begin{eqnarray}\label{1.10} \mathscr{H}_\mu(f*g) &=& \mathscr{H}_\mu(f)*_o\mathscr{H}_\mu(g), \end{eqnarray}
(10)

where \(\Lambda\) is the usual Fourier transform defined by \begin{eqnarray*} \Lambda(f)(\lambda_0, \lambda) &=& \int_0^\infty\int_{\mathbb{R}^n}f(r, x)\cos(\lambda_0 r)e^{-i\langle\lambda|x\rangle}dm(r, x), \end{eqnarray*} \(*\) is the convolution product associated with the Fourier transform \(\widetilde{\mathscr{F}}_{\mu-\frac{1}{2}},\)

\(*_o\) is the usual convolution product defined by \begin{eqnarray*} f*_o g(r, x) &=& \int_0^\infty\int_{\mathbb{R}^n}f(s, y)\sigma_{r,x}(g)(s,- y)dm(s, y) \end{eqnarray*} and \(\sigma_{r, x}\) is the usual translation operator given by

\begin{eqnarray} \sigma_{r, x}(f)(s, y) &=& \frac{1}{2}\ \big(f(r+s, x+y)+f(|r-s|, x+y)\big). \end{eqnarray}
(11)
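As a simple illustration, applying the translation operator to the kernel of \(\Lambda\) yields the classical product formula \begin{eqnarray*} \sigma_{r,x}\big(\cos(\lambda_0\cdot)e^{-i\langle\lambda|\cdot\rangle}\big)(s,y)&=&\frac{1}{2}\Big(\cos(\lambda_0(r+s))+\cos(\lambda_0(r-s))\Big)e^{-i\langle\lambda|x+y\rangle}\\ &=&\cos(\lambda_0 r)\cos(\lambda_0 s)\,e^{-i\langle\lambda|x+y\rangle}. \end{eqnarray*}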
Our purpose in this work is to study the fractional transforms \(\mathscr{R}_\mu\) and \(\mathscr{H}_\mu\) in two ways.

In the second section, we will prove that the operator \(\mathscr{R}_\mu\) is a topological isomorphism from \(\mathscr{E}_e\big(\mathbb{R}\times\mathbb{R}^n\big)\) (the space of infinitely differentiable functions on \(\mathbb{R}\times\mathbb{R}^n\), even with respect to the first variable) onto itself, and we give the inverse operator \(\mathscr{R}_\mu^{-1}\) as an integro-differential operator.

Next, we show that the fractional transform \(\mathscr{H}_\mu\) can be extended to \(\mu\in\mathbb{R}\) and that for every \(\mu\in\mathbb{R}\), \(\mathscr{H}_\mu\) is a topological isomorphism from the Schwartz space \(\mathscr{S}_e\big(\mathbb{R}\times\mathbb{R}^n\big)\) (the subspace of \(\mathscr{E}_e\big(\mathbb{R}\times\mathbb{R}^n\big)\) consisting of functions that are rapidly decreasing together with all their derivatives) onto itself, whose inverse operator is \(\mathscr{H}_\mu^{-1}=\mathscr{H}_{-\mu}\).

The preceding results imply in particular that \(\mathscr{R}_\mu\) and \(\mathscr{H}_\mu\) are transmutation operators of \(D_j,\ 1\leq j\leq n\), and \(\Xi_\mu\) to \(D_j,\ 1\leq j\leq n\) and \(\Delta\), where \begin{eqnarray*} \Delta &=&(\frac{\partial}{\partial r})^2+ \sum_{j=1}^n(\frac{\partial}{\partial x_j})^2. \end{eqnarray*} That is, for every \(f\in \mathscr{E}_e\big(\mathbb{R}\times\mathbb{R}^n\big)\) \begin{eqnarray*} D_j\mathscr{R}_\mu(f)&=&\mathscr{R}_\mu D_j(f),\ 1\leqslant j\leqslant n \\ \Xi_\mu \mathscr{R}_\mu(f) &=& \mathscr{R}_\mu\ \Delta(f), \end{eqnarray*} and for every \(f\in \mathscr{S}_e\big(\mathbb{R}\times\mathbb{R}^n\big)\) \begin{eqnarray*} D_j\mathscr{H}_\mu(f)&=&\mathscr{H}_\mu D_j(f),\ 1\leqslant j\leqslant n \\ \Delta \mathscr{H}_\mu(f) &=& \mathscr{H}_\mu\ \Xi_\mu(f). \end{eqnarray*}

The third section contains the main results of this paper. In fact, we study the \(L^p\)-boundedness of the operators \(\mathscr{R}_\mu\) and \(\mathscr{H}_\mu\) on the weighted spaces \(L^p\big([0, +\infty[\times\mathbb{R}^n, r^{2a}dr \otimes dx\big),\ p\in [1, +\infty].\) We recall in this context that studying the \(L^p\)-boundedness of integral transforms connected with differential systems is an interesting subject, because knowing the range of parameters \(\mu,\ p\) for which an operator is bounded on a Lebesgue space gives quantitative information about the rate of growth of the transformed functions [15, 16, 17].

In this work, we give necessary and sufficient conditions on the parameters \(\mu,\ a,\ p\) for which the operator \(\mathscr{R}_\mu\) (respectively \(\mathscr{H}_\mu\)) satisfies
\begin{eqnarray}\label{1.12} ||\mathscr{R}_\mu (f)||_{p, a} &\leqslant& C_{p, a, \mu}\ ||f||_{p,a}, \end{eqnarray}
(12)
respectively
\begin{eqnarray}\label{1.13} ||\mathscr{H}_\mu(f)||_{p, a} &\leqslant& D_{p, a, \mu}\ ||r ^{2 \mu}f||_{p,a}. \end{eqnarray}
(13)
Moreover, we give the best (the smallest) constants \(C_{p, a, \mu}\) and \(D_{p, a, \mu}\) that satisfy the relations (12) and (13).

2. Fractional transforms

2.1. The fractional transform \(\mathscr{R}_\mu\)

The space \( \mathscr{E}_e(\mathbb{R}\times\mathbb{R}^n)\) is equipped with the topology generated by the family of semi-norms \begin{eqnarray*} P_{m,k}(f)&=& \renewcommand{\arraystretch}{0.5} \begin{array}[t]{c} \sup \\ {\scriptstyle ||(r,x)||\leqslant m \atop |\alpha|\leqslant k}\ \end{array} \renewcommand{\arraystretch}{0.5}\big|D^\alpha (f)(r,x)\big|,\ (m,k)\in\mathbb{N}^2. \end{eqnarray*} and the distance \begin{eqnarray*} d(f,g)&=&\sum_{m,k=0}^{+\infty}(\frac{1}{2})^{m+k}\frac{P_{m,k}(f-g)}{1+P_{m,k}(f-g)}. \end{eqnarray*}

Lemma 2.1. i. For every \(\mu>0\), the transform \(\mathscr{R}_\mu\) is continuous from \(\mathscr{E}_e(\mathbb{R}\times\mathbb{R}^n)\) into itself.
ii. The operator \(\displaystyle \frac{\partial}{\partial r^2}=\frac{1}{r}\frac{\partial}{\partial r}\) is continuous from \(\mathscr{E}_e(\mathbb{R}\times\mathbb{R}^n)\) into itself.

Proof. i.. For every \(f\in \mathscr{E}_e(\mathbb{R}\times\mathbb{R}^n)\), we have \begin{eqnarray*} \mathscr{R}_\mu(f)(r,x)&=&\frac{2\Gamma(\mu+\frac{1}{2})}{\sqrt{\pi}\ \Gamma(\mu)}\int_0^1(1-t^2)^{\mu-1}f(tr,x)dt, \end{eqnarray*} this shows that the function \(\mathscr{R}_\mu(f)\) belongs to the space \(\mathscr{E}_e(\mathbb{R}\times\mathbb{R}^n)\). Moreover, for every \((\alpha_0,\alpha)\in \mathbb{N}\times\mathbb{N}^n\) \begin{eqnarray*} D^{(\alpha_0,\alpha)}(\mathscr{R}_\mu(f))(r,x)&=&\frac{2\Gamma(\mu+\frac{1}{2})}{\sqrt{\pi}\ \Gamma(\mu)}\int_0^1(1-t^2)^{\mu-1}t^{\alpha_0} D^{(\alpha_0,\alpha)}(f)(tr,x)dt, \end{eqnarray*} thus, for every \((m,k) \in \mathbb{N}^2, P_{m,k}(\mathscr{R}_\mu(f))\leqslant P_{m,k}(f).\)
ii.. For every \(f\in \mathscr{E}_e(\mathbb{R}\times\mathbb{R}^n)\) \begin{eqnarray*} \displaystyle\frac{\partial}{\partial r^2}(f)(r,x)&=&\int_0^1\frac{\partial^2 f}{\partial t^2}(rt,x)dt. \end{eqnarray*} Hence, the function \(\displaystyle \frac{\partial}{\partial r^{2}}(f)\) belongs to the space \(\mathscr{E}_e(\mathbb{R}\times\mathbb{R}^n)\) and for every \((\alpha_0,\alpha)\in \mathbb{N}\times\mathbb{N}^n\) \begin{eqnarray*} \displaystyle D^{(\alpha_{0},\alpha)}(\frac{\partial}{\partial r^{2}}f)(r,x)&=&\int_{0}^{1}t^{\alpha_0}D^{(\alpha_0+2,\alpha)}(f)(rt,x)dt, \end{eqnarray*} so, for every \((m,k) \in \mathbb{N}^{2}, P_{m,k}\big(\frac{\partial}{\partial r^{2}}(f)\big)\leqslant P_{m,k+2}(f).\)

In the following, we shall prove that \(\mathscr{R}_\mu\) is a topological isomorphism from \(\mathscr{E}_e(\mathbb{R}\times\mathbb{R}^n)\) onto itself and we give the inverse operator. For this we need the following notations. \(r^{2a}\mathscr{E}_e(\mathbb{R}\times\mathbb{R}^n)\) is the space defined by $$r^{2a}\mathscr{E}_e(\mathbb{R}\times\mathbb{R}^n)=\big\{ f:\mathbb{R}\backslash\{0\}\times\mathbb{R}^n\longrightarrow\mathbb{C};\ f\ \mbox{is even with respect to the first variable and}\ f(r,x)=r^{2a}g(r,x),\ g\in\mathscr{E}_e(\mathbb{R}\times\mathbb{R}^n)\big\},$$ and it is equipped with the family of semi-norms $$\widetilde{P}_{m,k,a}(f)=P_{m,k}(r^{-2a}f).$$ \(\widetilde{\mathscr{R}_\mu}\) is the transform defined on \(r^{2a}\mathscr{E}_e(\mathbb{R}\times\mathbb{R}^n),\ a>-\frac{1}{2},\) by \begin{eqnarray*} \widetilde{\mathscr{R}_\mu}(f)(r,x)&=& \frac{2r}{2^\mu\ \Gamma(\mu)}\int_0^r(r^2-t^2)^{\mu-1}f(t,x)dt,\ r>0. \end{eqnarray*}

Proposition 2.2. i. For every \(a>-\frac{1}{2}\), the operator \(\Box\) defined by \begin{eqnarray*} \Box(f)(r,x)&=&\frac{\partial}{\partial r}\big(\frac{f(r,x)}{r}\big) \end{eqnarray*} is continuous from \(r^{2(a+1)}\mathscr{E}_e(\mathbb{R}\times\mathbb{R}^n)\) into \(r^{2a}\mathscr{E}_e(\mathbb{R}\times\mathbb{R}^n)\).
ii. The transform \(\widetilde{\mathscr{R}_\mu}\) is continuous from \(r^{2a}\mathscr{E}_e(\mathbb{R}\times\mathbb{R}^n)\) into \(r^{2(a+\mu)}\mathscr{E}_e(\mathbb{R}\times\mathbb{R}^n)\).

Proof. i. Let \(f \in r^{2(a+1)}\mathscr{E}_e(\mathbb{R}\times\mathbb{R}^n)\); \(f(r,x)=r^{2a+2}g(r,x),\ g\in\mathscr{E}_e(\mathbb{R}\times\mathbb{R}^n)\). Then $$\Box f(r,x)=r^{2a}\big((2a+1)g(r,x)+r\frac{\partial g}{\partial r}(r,x)\big).$$ Since the map \(g\longmapsto(2a+1)g+\displaystyle r\frac{\partial g}{\partial r}\) is continuous from \(\mathscr{E}_e(\mathbb{R}\times\mathbb{R}^n)\) into itself, the function \(\Box (f)\) belongs to \(r^{2a}\mathscr{E}_e(\mathbb{R}\times\mathbb{R}^n).\) Moreover, for every \((m,k)\in\mathbb{N}^2\) \begin{eqnarray*} \widetilde{P}_{m,k,a}(\Box(f)) &=& P_{m,k}\big((2a+1)g+r\frac{\partial g}{\partial r}\big) \\ &\leqslant & C P_{m',k'}(g)=C \widetilde{P}_{m',k',a+1}(f), \end{eqnarray*} where \(C\) is a constant.
ii. For every \(f\in r^{2a}\mathscr{E}_e(\mathbb{R}\times\mathbb{R}^n), f=r^{2a}g,\ g\in \mathscr{E}_e(\mathbb{R}\times\mathbb{R}^n)\ \mbox{and}\ a>-\frac{1}{2}\), the function \begin{eqnarray*} \widetilde{\mathscr{R}_\mu}(f)(r,x) &=& \frac{2r}{2^\mu\ \Gamma(\mu)}\int_0^r(r^2-t^2)^{\mu-1}t^{2a}g(t,x)dt \\ &= &\frac{2r^{2a+2\mu}}{2^\mu\ \Gamma(\mu)}\int_0^1(1-t^2)^{\mu-1}t^{2a}g(tr,x)dt \end{eqnarray*} belongs to the space \(r^{2(a+\mu)}\mathscr{E}_e(\mathbb{R}\times\mathbb{R}^n)\), and for every \((m,k)\in \mathbb{N}^2\) \begin{eqnarray*} \widetilde{P}_{m,k,a+\mu}(\widetilde{\mathscr{R}}_\mu(f)) &=&P_{m,k}\big(\frac{2}{2^\mu\ \Gamma(\mu)} \int_0^1(1-t^{2})^{\mu-1}t^{2a}g(tr,x)dt \big)\\ &\leqslant &\frac{\Gamma(a+\frac{1}{2})}{2^\mu\ \Gamma(\mu+a+\frac{1}{2})}P_{m,k}(g)\\ &= &\frac{\Gamma(a+\frac{1}{2})}{2^\mu\ \Gamma(\mu+a+\frac{1}{2})} \widetilde{P}_{m,k,a}(f). \end{eqnarray*}
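The constant appearing in ii. comes from the elementary Beta integral, obtained by the change of variable \(s=t^2\): $$\int_0^1(1-t^2)^{\mu-1}t^{2a}dt=\frac{1}{2}\int_0^1(1-s)^{\mu-1}s^{a-\frac{1}{2}}ds=\frac{\Gamma(\mu)\ \Gamma(a+\frac{1}{2})}{2\ \Gamma(\mu+a+\frac{1}{2})},\qquad \mu>0,\ a>-\frac{1}{2}.$$ This identity is used repeatedly, with different values of the exponent, in the estimates below.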

Proposition 2.3. For all \(\mu,\nu > 0\) and \(f\in r^{2a}\mathscr{E}_e(\mathbb{R}\times\mathbb{R}^n),\ a>-\frac{1}{2}\), we have \begin{eqnarray*} \displaystyle \widetilde{\mathscr{R}_\mu}\circ\widetilde{\mathscr{R}_\nu}(f) &=& \widetilde{\mathscr{R}}_{\mu+\nu}(f). \end{eqnarray*}

Proof. For all \(\mu\ ,\nu > 0\) and \(f\in r^{2a}\mathscr{E}_e(\mathbb{R}\times\mathbb{R}^n),\ a>-\frac{1}{2},\) \begin{eqnarray*} \widetilde{\mathscr{R}_\mu}\circ\widetilde{\mathscr{R}_\nu}(f)(r,x)= \frac{2r}{2^{\mu+\nu}\ \Gamma(\mu)\ \Gamma(\nu)} \int_0^r(r^2-t^2)^{\mu-1}2t \Big(\int_0^t(t^2-s^2)^{\nu-1}f(s,x)ds\Big)dt. \end{eqnarray*} Applying Fubini's theorem we get \begin{eqnarray*} \widetilde{\mathscr{R}_\mu}\circ\widetilde{\mathscr{R}_\nu}(f)(r,x)= \frac{2r}{2^{\mu+\nu}\ \Gamma(\mu)\ \Gamma(\nu)} \int_0^rf(s,x) \Big(\int_s^r(r^2-t^2)^{\mu-1}(t^2-s^2)^{\nu-1}2tdt\Big)ds, \end{eqnarray*} however, \(\displaystyle\int_s^r(r^2-t^2)^{\mu-1}(t^2-s^2)^{\nu-1}2tdt=\displaystyle\frac{\Gamma(\mu)\ \Gamma(\nu)}{\Gamma(\mu+\nu)}(r^2-s^2)^{\mu+\nu-1}.\)
This completes the proof.
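The Beta-type identity invoked above (and again in Proposition 2.15 below) can be checked by the change of variable \(w=\frac{t^2-s^2}{r^2-s^2}\): $$\int_s^r(r^2-t^2)^{\mu-1}(t^2-s^2)^{\nu-1}2tdt=(r^2-s^2)^{\mu+\nu-1}\int_0^1(1-w)^{\mu-1}w^{\nu-1}dw=\frac{\Gamma(\mu)\ \Gamma(\nu)}{\Gamma(\mu+\nu)}(r^2-s^2)^{\mu+\nu-1}.$$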

Proposition 2.4. i. For every \(\mu>1\) and \(f\in r^{2a}\mathscr{E}_e(\mathbb{R}\times\mathbb{R}^n),\ a>-\frac{1}{2}\), we have \begin{eqnarray*} \displaystyle \Box\widetilde{\mathscr{R}_\mu}(f)&=&\widetilde{\mathscr{R}}_{\mu-1}(f). \end{eqnarray*} In particular, for every \(\mu>0,\ k\in\mathbb{N}\)

\begin{eqnarray}\label{rel2.1} \Box^k\widetilde{\mathscr{R}}_{\mu+k}(f)&=&\widetilde{\mathscr{R}_\mu}(f). \end{eqnarray}
(14)
ii. For every \(f\in r^{2(a+1)}\mathscr{E}_e(\mathbb{R}\times\mathbb{R}^n),\ a>-\frac{1}{2}\ \mbox{and}\ \mu>0\)
\begin{eqnarray}\label{rel2.2} \widetilde{\mathscr{R}_\mu}(\Box f)&=&\Box\widetilde{\mathscr{R}_\mu}(f). \end{eqnarray}
(15)
In particular, for every \(f\in r^{2(a+k)}\mathscr{E}_e(\mathbb{R}\times\mathbb{R}^n),\ a>-\frac{1}{2} ,\ k\in\mathbb{N}\)
\begin{eqnarray}\label{2.3} \widetilde{\mathscr{R}_\mu}(\Box^k(f))&=&\Box^k\widetilde{\mathscr{R}_\mu}(f). \end{eqnarray}
(16)

Proof. i. Let \(f\in r^{2a}\mathscr{E}_e(\mathbb{R}\times\mathbb{R}^n),\) \begin{eqnarray*} \Box\widetilde{\mathscr{R}_\mu}(f)(r,x)&=& \frac{\partial}{\partial r}\big(\frac{2}{2^\mu\ \Gamma(\mu)} \int_0^r(r^2-t^2)^{\mu-1}f(t,x)dt\big)\\ &=&\frac{2\cdot2r(\mu-1)}{2^\mu\ \Gamma(\mu)} \int_0^r(r^2-t^2)^{\mu-2}f(t,x)dt\\ &=&\widetilde{\mathscr{R}}_{\mu-1}(f)(r,x), \end{eqnarray*} and by induction, we deduce that for all \(\mu>0,\ k\in\mathbb{N}\) \begin{eqnarray*} \Box^k\widetilde{\mathscr{R}}_{\mu+k}(f)&=&\widetilde{\mathscr{R}_\mu}(f). \end{eqnarray*}
ii. Let \(f\in r^{2(a+1)}\mathscr{E}_e(\mathbb{R}\times\mathbb{R}^n)\), by Proposition 2.2, the function \(\Box (f)\) belongs to the space \(r^{2a}\mathscr{E}_e(\mathbb{R}\times\mathbb{R}^n)\) and we have \begin{eqnarray*} \widetilde{\mathscr{R}_\mu}(f)(r,x)=\frac{r}{2^\mu\ \Gamma(\mu+1)}\int_0^r-\displaystyle\frac{\partial}{\partial t}\big((r^2-t^2)^\mu\big)\frac{f(t,x)}{t}dt. \end{eqnarray*} Integrating by parts, we get \begin{eqnarray*} \widetilde{\mathscr{R}_\mu}(f)(r,x)=\frac{r}{2^\mu\ \Gamma(\mu+1)}\int_0^r(r^2-t^2)^{\mu}\Box f(t,x)dt, \end{eqnarray*} so, \begin{eqnarray*} \Box\widetilde{\mathscr{R}_\mu}(f)(r,x)&=&\frac{2r}{2^\mu\ \Gamma(\mu)}\int_0^r(r^2-t^2)^{\mu-1}\Box f(t,x)dt\\ &=&\label{2.2}\widetilde{\mathscr{R}_\mu}(\Box f)(r,x). \end{eqnarray*} Now, suppose that for every \(f\in r^{2(a+k)}\mathscr{E}_e(\mathbb{R}\times\mathbb{R}^n),\) \(\Box^k\widetilde{\mathscr{R}_\mu}(f)=\widetilde{\mathscr{R}_\mu}(\Box^k f)\),
let \(g\in r^{2(a+k+1)}\mathscr{E}_e(\mathbb{R}\times\mathbb{R}^n).\)
Then, the function \(\Box g\) belongs to \( r^{2(a+k)}\mathscr{E}_e(\mathbb{R}\times\mathbb{R}^n)\), and by hypothesis \begin{eqnarray*} \Box^k\widetilde{\mathscr{R}_\mu}(\Box g)(r,x)=\widetilde{\mathscr{R}_\mu}(\Box^{k+1} g), \end{eqnarray*} on the other hand, by relation(15) and the fact that \(\Box g\in r^{2(a+k)}\mathscr{E}_e(\mathbb{R}\times\mathbb{R}^n)\subset r^{2(a+1)}\mathscr{E}_e(\mathbb{R}\times\mathbb{R}^n)\), we have \begin{eqnarray*} \Box^k\widetilde{\mathscr{R}_\mu}(\Box g)(r,x)=\Box^{k+1}\widetilde{\mathscr{R}_\mu}( g). \end{eqnarray*} The proof is complete by induction.

Theorem 2.5. For every \(k\in\mathbb{N}\backslash\{0\}\), the operator \(\widetilde{\mathscr{R}_k}\) is an isomorphism from \(r^{2a}\mathscr{E}_e(\mathbb{R}\times\mathbb{R}^n)\) onto \(r^{2(a+k)}\mathscr{E}_e(\mathbb{R}\times\mathbb{R}^n);\ a>-\frac{1}{2}.\)
The inverse operator is given by \begin{eqnarray*} \displaystyle \widetilde{\mathscr{R}_k}^{-1}=\Box^k. \end{eqnarray*}

Proof. Let \(f\in r^{2a}\mathscr{E}_e(\mathbb{R}\times\mathbb{R}^n).\) From Proposition 2.2, the function \(\widetilde{\mathscr{R}_k}(f)\) belongs to \(r^{2(a+k)}\mathscr{E}_e(\mathbb{R}\times\mathbb{R}^n)\) and by relation(14), we have \begin{eqnarray*} \Box^k\widetilde{\mathscr{R}_k}(f)&=&\Box\Box^{k-1}\widetilde{\mathscr{R}}_{1+(k-1)}(f)\\ &=&\Box\widetilde{\mathscr{R}_1}(f)\\ &=& f. \end{eqnarray*} Let \(g \in r^{2(a+k)}\mathscr{E}_e(\mathbb{R}\times\mathbb{R}^n)\subset r^{2a}\mathscr{E}_e(\mathbb{R}\times\mathbb{R}^n)\), by relation(16) \begin{eqnarray*} \widetilde{\mathscr{R}_k}(\Box^k(g))&=&\Box^k\widetilde{\mathscr{R}_k}(g)\\ &=& g. \end{eqnarray*} This achieves the proof.

Theorem 2.6. For every \(\mu\in ]0,1[\), the fractional transform \(\widetilde{\mathscr{R}_\mu}\) is an isomorphism from \(r^{2a}\mathscr{E}_e(\mathbb{R}\times\mathbb{R}^n)\) onto \(r^{2(a+\mu)}\mathscr{E}_e(\mathbb{R}\times\mathbb{R}^n), a>-\frac{1}{2}.\) The inverse operator is given by $$\widetilde{\mathscr{R}_\mu}^{-1}=\Box\widetilde{\mathscr{R}}_{1-\mu}.$$

Proof. Let \(g\in r^{2(a+\mu)}\mathscr{E}_e(\mathbb{R}\times\mathbb{R}^n)\), $$ g(r,x)=r^{2a+2\mu}h(r,x); \ h\in \mathscr{E}_e(\mathbb{R}\times\mathbb{R}^n), $$ \begin{eqnarray*} \Box\widetilde{\mathscr{R}}_{1-\mu}(g)(r,x)&=& \frac{\partial}{\partial r}\Big(\frac{2}{2^{1-\mu}\Gamma(1-\mu)}\int_0^r(r^2-t^2)^{-\mu}t^{2a+2\mu}h(t,x)dt\Big)\\ &=& \frac{\partial}{\partial r}\Big(\frac{2r^{2a+1}}{2^{1-\mu}\Gamma(1-\mu)}\int_0^1(1-t^2)^{-\mu}t^{2a+2\mu}h(tr,x)dt\Big)\\ &=& 2(2a+1)\frac{r^{2a}}{2^{1-\mu}\ \Gamma(1-\mu)}\int_0^1(1-t^{2})^{-\mu}t^{2a+2\mu}h(tr,x)dt \\&+& 2\frac{r^{2a+1}}{2^{1-\mu}\Gamma(1-\mu)}\int_0^1(1-t^2)^{-\mu}t^{2a+2\mu+1}\frac{\partial h}{\partial t}(tr,x)dt\\ &=& 2\frac{(2a+1)}{2^{1-\mu}\Gamma(1-\mu)}\frac{1}{r}\int_0^r(r^2-t^2)^{-\mu}t^{2a+2\mu}h(t,x)dt \\&+& \frac{2}{2^{1-\mu}\ \Gamma(1-\mu)}\frac{1}{r}\int_0^r(r^2-t^2)^{-\mu}t^{2a+2\mu+1}\frac{\partial h}{\partial t}(t,x)dt. \end{eqnarray*} We deduce that \begin{eqnarray*} \widetilde{\mathscr{R}}_\mu\Big(\Box \widetilde{\mathscr{R}}_{1-\mu}(g)\Big)(r,x)&=& \frac{2(2a+1)2r}{2\Gamma(\mu)\ \Gamma(1-\mu)}\int_0^r(r^2-t^2)^{\mu-1}\frac{1}{t}\Big(\int_0^t(t^2-s^2)^{-\mu}s^{2a+2\mu}h(s,x)ds\Big)dt\\ &+& \frac{2.2r}{2\Gamma(\mu)\ \Gamma(1-\mu)}\int_0^r(r^2-t^2)^{\mu-1}\frac{1}{t}\Big(\int_0^t(t^2-s^2)^{-\mu}s^{2a+2\mu+1}\frac{\partial h}{\partial s}(s,x)ds\Big)dt\\&=&I_{1,\mu}(r,x)+I_{2,\mu}(r,x). \end{eqnarray*} From Fubini's theorem, we have $$I_{1,\mu}(r,x)= \frac{(2a+1)r}{\Gamma(\mu)\ \Gamma(1-\mu)}\int_0^rh(s,x)\Big(\int_s^r(r^2-t^2)^{\mu-1}(t^2-s^2)^{-\mu}\frac{2t}{t^2}dt\Big)s^{2a+2\mu}ds.$$ Let $$J(r,s)=\int_s^r(r^2-t^2)^{\mu-1}(t^2-s^2)^{-\mu}\frac{2t}{t^2}dt.$$ By the change of variables \(\omega=\frac{r^2-t^2}{r^2-s^2},\) we get \begin{eqnarray*} J(r,s)&=&\frac{1}{r^2}\int_0^1\frac{\omega^{\mu-1}(1-\omega)^{-\mu}}{1-\frac{r^2-s^2}{r^2}\omega}d\omega \\ &=&\frac{1}{r^2}\sum_{k=0}^\infty(\frac{r^2-s^2}{r^2})^k\int_0^1\omega^{k+\mu-1}(1-\omega)^{-\mu}d\omega \\&=&\frac{\Gamma(1-\mu)}{r^2}\sum_{k=0}^\infty\frac{\Gamma(k+\mu)}{k!}(\frac{r^2-s^2}{r^2})^k \\&=&\Gamma(\mu)\ \Gamma(1-\mu)r^{2\mu-2}s^{-2\mu}. \end{eqnarray*} So, $$I_{1,\mu}(r,x)= (2a+1)r^{2\mu-1}\int_0^r h(s,x)s^{2a} ds$$ As the same way, \begin{eqnarray*} I_{2,\mu}(r,x)&=& \frac{r}{\Gamma(\mu)\ \Gamma(1-\mu)}\int_0^r\displaystyle\frac{\partial h}{\partial s}(s,x)\big(\int_s^r(r^2-t^2)^{\mu-1}(t^2-s^2)^{-\mu}\frac{2t}{t^2}dt\big)s^{2a+2\mu+1}ds\\ &=&r^{2\mu-1}\int_0^r\frac{\partial h}{\partial s}(s,x)s^{2a+1} ds. \end{eqnarray*} Consequently, \begin{eqnarray*} \widetilde{\mathscr{R}}_\mu\big(\Box \widetilde{\mathscr{R}}_{1-\mu}(g)\big)(r,x)&=& r^{2\mu-1}\int_0^r\Big((2a+1)s^{2a}h(s,x)+s^{2a+1}\frac{\partial h}{\partial s}(s,x)\Big)ds\\ &=&r^{2\mu-1}\int_0^r\frac{\partial }{\partial s}\big(s^{2a+1}h(s,x)\big) ds\\&=&r^{2a+2\mu}h(r,x),\hbox{ because}\ a>-\frac{1}{2}\\&=&g(r,x). \end{eqnarray*} On the other hand, from Proposition 2.3 and for every \(f\in r^{2a}\mathscr{E}_e(\mathbb{R}\times\mathbb{R}^n)\), \begin{eqnarray*} \Box\widetilde{\mathscr{R}}_{1-\mu}\widetilde{\mathscr{R}}_\mu(f)&=& \Box\widetilde{\mathscr{R}}_1(f)\\&=&f. \end{eqnarray*} This completes the proof.

Lemma 2.7. Let \(\mu\in\mathbb{R},\ \mu\geqslant0.\) For every \(k_1,\ k_2 \in \mathbb{N}\backslash \{0\},\ k_1-\mu>0,\ k_2-\mu>0\) and for every \(f\in r^{2(a+\mu)}\mathscr{E}_e(\mathbb{R}\times\mathbb{R}^n)\), we have $$ \Box^{k_1}\widetilde{\mathscr{R}}_{k_1-\mu}(f)= \Box^{k_2}\widetilde{\mathscr{R}}_{k_2-\mu}(f) .$$

Proof. Let \(k_1, \ k_2 \in \mathbb{N}\backslash \{0\},\ k_1-\mu>0,\ k_2-\mu>0\), and $k_1< k_2 ,$ $$\Box^{k_2}\widetilde{\mathscr{R}}_{k_2-\mu}(f)=\Box^{k_1}\Box^{k_2-k_1}\widetilde{\mathscr{R}}_{k_2-k_1+(k_1-\mu)}(f), $$ applying relation (14), we get $$\Box^{k_2}\widetilde{\mathscr{R}}_{k_2-\mu}(f)= \Box^{k_1}\widetilde{\mathscr{R}}_{k_1-\mu}(f).$$

The previous Lemma allows us to define the fractional transform \(\widetilde{\mathscr{R}_\mu}\) for every \(\mu\in\mathbb{R}.\)

Definition 2.8. For every \(\mu\in\mathbb{R},\ \mu\geqslant0\), the fractional transform \(\widetilde{\mathscr{R}_{-\mu}}\) is defined on \(r^{2(a+\mu)}\mathscr{E}_e(\mathbb{R}\times\mathbb{R}^n)\) by $$\widetilde{\mathscr{R}_{-\mu}}(f)=\Box^k\widetilde{\mathscr{R}}_{k-\mu}(f),$$ where \( k \in \mathbb{N}\backslash \{0\},\ k-\mu>0.\)
In particular, for \(f \in r^{2(a+\mu)}\mathscr{E}_e(\mathbb{R}\times\mathbb{R}^n)\) $$\widetilde{\mathscr{R}_{-\mu}}(f)=\Box^{E(\mu)+1}\widetilde{\mathscr{R}}_{E(\mu)+1-\mu}(f),$$ where \(E(\mu)\) is the integer part of \(\mu\).

Remark 2.9. According to Definition 2.8 and for every \(f \in r^{2a}\mathscr{E}_{e}(\mathbb{R}\times\mathbb{R}^{n}), \ a>-\frac{1}{2}\), we have $$\widetilde{\mathscr{R}_0}(f)=\Box\widetilde{\mathscr{R}_1}(f)=f,$$ that is $$\widetilde{\mathscr{R}_0}=Id_{ r^{2a}\mathscr{E}_e(\mathbb{R}\times\mathbb{R}^n)}.$$

Theorem 2.10. For \(\mu>0\), the fractional transform \(\widetilde{\mathscr{R}_\mu}\) is a topological isomorphism from \(r^{2a}\mathscr{E}_e(\mathbb{R}\times\mathbb{R}^n)\) onto \(r^{2(a+\mu)}\mathscr{E}_e(\mathbb{R}\times\mathbb{R}^n),\ a>-\frac{1}{2}.\)
The inverse operator is given by $$\widetilde{\mathscr{R}_\mu}^{-1}=\widetilde{\mathscr{R}_{-\mu}}.$$

Proof. For \(\mu\in\mathbb{N}\), the result follows from Theorem 2.5 and Remark 2.9.
Let \(\mu\in]0,+\infty[\backslash\mathbb{N}\), for every \(f\in r^{2a}\mathscr{E}_e(\mathbb{R}\times\mathbb{R}^n)\) and from Proposition 2.3 and Theorem 2.5, we have \begin{eqnarray*} \widetilde{\mathscr{R}_{-\mu}}\big(\widetilde{\mathscr{R}_\mu}(f)\big)&=& \Box^{E(\mu)+1}\widetilde{\mathscr{R}}_{E(\mu)+1-\mu}\big(\widetilde{\mathscr{R}}_\mu(f)\big)\\ &=& \Box^{E(\mu)+1}\widetilde{\mathscr{R}}_{E(\mu)+1}(f)\\ &=&f. \end{eqnarray*} Conversely, for every \(g\in r^{2(a+\mu)}\mathscr{E}_e(\mathbb{R}\times\mathbb{R}^n),\) $$\widetilde{\mathscr{R}_\mu}\circ\widetilde{\mathscr{R}_{-\mu}}(g)=\widetilde{\mathscr{R}_\mu}\Box^{E(\mu)+1}\widetilde{\mathscr{R}}_{E(\mu)+1-\mu}(g),$$ let \(\nu=\mu-E(\mu)\), then \(\nu\in]0,1[\), and $$\widetilde{\mathscr{R}_\mu}\circ\widetilde{\mathscr{R}_{-\mu}}(g)=\widetilde{\mathscr{R}_\nu}\widetilde{\mathscr{R}}_{E(\mu)} \Box^{E(\mu)}\Box\widetilde{\mathscr{R}}_{1-\nu}(g). $$ Since, \(\Box\widetilde{\mathscr{R}}_{1-\nu}(g)\) belongs to \(r^{2(a+E(\mu))}\mathscr{E}_e(\mathbb{R}\times\mathbb{R}^n),\) then, Theorem 2.5 involves that $$\widetilde{\mathscr{R}}_\mu\circ\widetilde{\mathscr{R}}_{-\mu}(g)=\widetilde{\mathscr{R}}_\nu \Box\widetilde{\mathscr{R}}_{1-\nu}(g).$$ The result follows from Theorem 2.6.

Now, we have the following important result.

Theorem 2.11. For every \(\mu>0\), the fractional transform \(\mathscr{R}_\mu\) defined by relation (5) is a topological isomorphism from \(\mathscr{E}_e(\mathbb{R}\times\mathbb{R}^n)\) onto itself.

Proof. For every \(f\in\mathscr{E}_e(\mathbb{R}\times\mathbb{R}^n)\), $$\mathscr{R}_\mu(f)(r,x)=\frac{2^\mu\ \Gamma(\mu+\frac{1}{2})}{\sqrt{\pi}}r^{-2\mu}\widetilde{\mathscr{R}_\mu}(f)(r,x).$$ From Theorem 2.10, the transform \(\widetilde{\mathscr{R}_\mu}\) is a topological isomorphism from \(\mathscr{E}_e(\mathbb{R}\times\mathbb{R}^n)\) onto \(r^{2\mu}\mathscr{E}_e(\mathbb{R}\times\mathbb{R}^n)\). On the other hand, the map $$f\longmapsto r^{-2\mu}f$$ is a topological isomorphism from \(r^{2\mu}\mathscr{E}_e(\mathbb{R}\times\mathbb{R}^n)\) onto \(\mathscr{E}_e(\mathbb{R}\times\mathbb{R}^n)\).
Consequently, \(\mathscr{R}_\mu\) is a topological isomorphism from \(\mathscr{E}_e(\mathbb{R}\times\mathbb{R}^n)\) onto itself.
Moreover, \begin{eqnarray*} \mathscr{R}_\mu^{-1}(f)(r,x)&=& \frac{\sqrt{\pi}}{2^{\mu}\ \Gamma(\mu+\frac{1}{2})}\widetilde{\mathscr{R}_{-\mu}}\big(r^{2\mu}f)(r,x\big)\\ &=& \frac{\sqrt{\pi}}{2^\mu\ \Gamma(\mu+\frac{1}{2})}\Box^{E(\mu)+1}\widetilde{\mathscr{R}}_{E(\mu)+1-\mu}\big(r^{2\mu}f\big)(r,x). \end{eqnarray*}

2.2. The fractional transform \(\mathscr{H}_\mu\)

We recall that the space \(\mathscr{S}_e(\mathbb{R}\times\mathbb{R}^n)\) is equipped with the topology generated by the family of norms $$N_m(f)= \renewcommand{\arraystretch}{0.5} \begin{array}[t]{c} \max \\ {\scriptstyle (r,x)\in \mathbb{R}\times\mathbb{R}^n \atop k+|\alpha|\leqslant m}\ \end{array} \renewcommand{\arraystretch}{0.5}(1+r^2+|x|^2)^k |D^\alpha(f)(r, x)|, \ m\in\mathbb{N}.$$ By a standard argument, for every \(f\in \mathscr{S}_e(\mathbb{R}\times\mathbb{R}^n)\), the function \(\displaystyle\frac{\partial}{\partial r^2}(f)\) belongs to \(\mathscr{S}_e(\mathbb{R}\times\mathbb{R}^n)\) and for every \(m\in \mathbb{N},\) $$N_m\big(\frac{\partial}{\partial r^2}(f)\big)\leqslant2^{m+1}N_{m+3}(f).$$ This shows that the operator \(\displaystyle\frac{\partial}{\partial r^2}\) is continuous from \(\mathscr{S}_e(\mathbb{R}\times\mathbb{R}^n)\)into itself and consequently the operator \(\Xi_\mu\) is also continuous from \(\mathscr{S}_e(\mathbb{R}\times\mathbb{R}^n)\) into itself.
On the other hand, for every \(f \in \mathscr{S}_e(\mathbb{R}\times\mathbb{R}^n)\) and for every \(k\in \mathbb{N},\) we have
\begin{eqnarray}\label{2.4} (1+\lambda_0^2+|\lambda|^2)^k\widetilde{\mathscr{F}}_{\mu-\frac{1}{2}}(f)(\lambda_0,\lambda)&=& \widetilde{\mathscr{F}}_{\mu-\frac{1}{2}}\big((I-\Xi_\mu)^k(f)\big)(\lambda_0, \lambda). \end{eqnarray}
(17)
where \(I\) is the identity operator.
Using the relation (7) and the inversion formula for \(\widetilde{\mathscr{F}}_{\mu-\frac{1}{2}}\), namely that for every \(f\in L^1(d\nu_\mu)\) such that \(\widetilde{\mathscr{F}}_{\mu-\frac{1}{2}}(f)\) belongs to \(L^1(d\nu_\mu)\) we have $$f=\widetilde{\mathscr{F}}_{\mu-\frac{1}{2}}\circ\widetilde{\mathscr{F}}_{\mu-\frac{1}{2}}(\check{f})\ \mbox{a.e.},$$ we deduce that the transform \(\widetilde{\mathscr{F}}_{\mu-\frac{1}{2}}\) is a topological isomorphism from \(\mathscr{S}_e(\mathbb{R}\times\mathbb{R}^n)\) onto itself and $$\widetilde{\mathscr{F}}_{\mu-\frac{1}{2}}^{-1}(f)= \widetilde{\mathscr{F}}_{\mu-\frac{1}{2}}(\check{f}),$$ where \(\check{f}(r, x)=f(r, -x).\)

Lemma 2.12. For every \(f\in L^1(d\nu_\mu)\) and \(\mu>0\), the function $$ \mathscr{H}_\mu(f)(t,x)=\frac{1}{2^\mu\ \Gamma(\mu)}\int_t^\infty(r^2-t^2)^{\mu-1}f(r,x)2rdr$$ is defined almost everywhere and belongs to \(L^1(dm)\), where \(dm\) is the Lebesgue measure given by relation (8), and we have $$ ||\mathscr{H}_\mu(f)||_{1,m}\leqslant||f||_{1,\nu_\mu}.$$

Proof. By the Fubini-Tonelli theorem, we have \begin{eqnarray*} \int_0^\infty\int_{\mathbb{R}^n}|\mathscr{H}_\mu(f)(t,x)|dm(t,x)&\leqslant& \sqrt{\frac{2}{\pi}} \frac{1}{2^\mu\ \Gamma(\mu)(2\pi)^{\frac{n}{2}}}\int_0^\infty\int_{\mathbb{R}^n}\Big(\int_t^\infty(r^2-t^2)^{\mu-1}|f(r,x)|2rdr\Big)dtdx\\&=& \sqrt{\frac{2}{\pi}} \frac{1}{2^\mu\ \Gamma(\mu)(2\pi)^{\frac{n}{2}}}\int_0^\infty\int_{\mathbb{R}^n}|f(r,x)|\Big(\int_0^r(r^2-t^2)^{\mu-1}dt\Big)2rdrdx\\&=& \frac{1}{2^{\mu-\frac{1}{2}}\Gamma(\mu+\frac{1}{2})(2\pi)^{\frac{n}{2}}}\int_0^\infty \int_{\mathbb{R}^n}|f(r,x)|r^{2\mu}drdx\\&=&\|f\|_{1,\nu_\mu}. \end{eqnarray*}
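As a simple illustration of Lemma 2.12, take \(f(r,x)=e^{-r^2}g(x)\) with \(g\in L^1(\mathbb{R}^n)\); the change of variable \(u=r^2-t^2\) gives $$\mathscr{H}_\mu(f)(t,x)=\frac{g(x)}{2^\mu\ \Gamma(\mu)}\int_t^\infty(r^2-t^2)^{\mu-1}e^{-r^2}2rdr=\frac{g(x)\,e^{-t^2}}{2^\mu\ \Gamma(\mu)}\int_0^\infty u^{\mu-1}e^{-u}du=2^{-\mu}e^{-t^2}g(x).$$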

Proposition 2.13. i. For every \(f\in L^1(d\nu_\mu)\) and every bounded measurable function \(g\) on \([0,+\infty[\times\mathbb{R}^n\), we have the duality relation $$ \int_0^\infty\int_{\mathbb{R}^n}f(r,x)\mathscr{R}_\mu(g)(r,x)d\nu_\mu(r,x)= \int_0^\infty\int_{\mathbb{R}^n}\mathscr{H}_\mu(f)(r,x)g(r,x)dm(r,x). $$ ii. For every \(f\in L^1(d\nu_\mu)\)

\begin{eqnarray}\label{2.5} \widetilde{\mathscr{F}}_{\mu-\frac{1}{2}}(f) &=& \Lambda\circ\mathscr{H}_\mu(f), \end{eqnarray}
(18)
where, \(\Lambda\) is the usual Fourier transform defined on \(L^1(dm)\) by $$\Lambda(f)(\lambda_0,\lambda)= \int_0^\infty\int_{\mathbb{R}^n}f(r,x)\cos(r\lambda_0)e^{-i\langle \lambda | x\rangle}dm(r,x).$$

Proof. i. It is clear that for every bounded function \(g\) on \([0,+\infty[\times\mathbb{R}^n\), the function \(\mathscr{R}_\mu(g)\) is also bounded on \([0,+\infty[\times\mathbb{R}^n\).
Consequently, the integral \(\displaystyle\int_0^\infty\int_{\mathbb{R}^n}f(r,x)\mathscr{R}_\mu(g)(r,x)d\nu_\mu(r,x)\) is well defined, and we have \begin{eqnarray*} \int_0^\infty\int_{\mathbb{R}^n}f(r,x)\mathscr{R}_\mu(g)(r,x)d\nu_\mu(r,x)&=& \int_0^\infty\int_{\mathbb{R}^n}f(r,x)\frac{2r}{2^{\mu-\frac{1}{2}}\sqrt{\pi} \ (2\pi)^{\frac{n}{2}}\Gamma(\mu)} \\&\times &\Big(\int_0^r(r^2-t^2)^{\mu-1}g(t,x)dt\Big)drdx. \end{eqnarray*} By Fubini's Theorem, \begin{eqnarray*} \int_0^\infty\int_{\mathbb{R}^n}f(r,x)\mathscr{R}_\mu(g)(r,x)d\nu_\mu(r,x)&=& \int_0^\infty\int_{\mathbb{R}^n}g(t,x)\big(\frac{1}{2^\mu\ \Gamma(\mu)}\int_t^\infty(r^2-t^2)^{\mu-1}f(r,x)2rdr\big)\\& \times&\sqrt{\frac{2}{\pi}} dt\frac{dx}{(2\pi)^{\frac{n}{2}}}\\&=& \int_0^\infty\int_{\mathbb{R}^n}g(t,x)\mathscr{H}_\mu(f)(t,x)dm(t,x). \end{eqnarray*} ii. Let \(f\in L^1(d\nu_\mu)\), we have $$\widetilde{\mathscr{F}}_{\mu-\frac{1}{2}}(f)(\lambda_0,\lambda)=\int_0^\infty\int_{\mathbb{R}^n}f(r,x)\Psi_{\lambda_0,\lambda}(r,x)d\nu_\mu(r,x)$$ and by the relation (6), $$\widetilde{\mathscr{F}}_{\mu-\frac{1}{2}}(f)(\lambda_0,\lambda)=\int_0^\infty\int_{\mathbb{R}^n}f(r,x)\mathscr{R}_\mu\big(\cos(\lambda_0.) e^{-i\langle\lambda|.\rangle}\big)(r,x)d\nu_\mu(r,x),$$ and by the relation of duality, Proposition 2.13, we obtain \begin{eqnarray*} % \nonumber to remove numbering (before each equation) \widetilde{\mathscr{F}}_{\mu-\frac{1}{2}}(f)(\lambda_0,\lambda)&=&\int_0^\infty\int_{\mathbb{R}^n}\mathscr{H}_\mu(f)(r,x)\cos(\lambda_0r) e^{-i\langle\lambda|x\rangle} dm(r,x)\\&=&\Lambda\circ\mathscr{H}_\mu(f)(\lambda_0,\lambda). \end{eqnarray*}

Corollary 2.14. For every \(\mu >0\), the fractional transform \(\mathscr{H}_\mu\) is a topological isomorphism from \(\mathscr{S}_e(\mathbb{R}\times\mathbb{R}^n)\) onto itself.

Proof. Since the Fourier transforms \(\Lambda\) and \(\widetilde{\mathscr{F}}_{\mu-\frac{1}{2}}\) are topological isomorphisms from \(\mathscr{S}_e(\mathbb{R}\times\mathbb{R}^n)\) onto itself, the result follows from the relation (18).

Next, we will prove that the fractional transform \(\mathscr{H}_\mu\) can be extended to \(\mu\in \mathbb{R}\) and we give the inverse operator \(\mathscr{H}_\mu^{-1}.\)

Proposition 2.15. For every \(\mu,\ \nu >0\) and \(f\in\mathscr{S}_e(\mathbb{R}\times\mathbb{R}^n),\) \begin{eqnarray*} \mathscr{H}_\mu\circ\mathscr{H}_\nu(f)=\mathscr{H}_{\mu+\nu}(f). \end{eqnarray*}

Proof. Let \(\mu,\ \nu >0\) and \(f\in\mathscr{S}_e(\mathbb{R}\times\mathbb{R}^n)\) $$\mathscr{H}_\mu\circ\mathscr{H}_\nu(f)(r,x)=\frac{1}{2^{\mu+\nu}\ \Gamma(\mu)\Gamma(\nu)}\int_r^\infty(t^2-r^2)^{\mu-1} \big(\int_t^{+\infty}(s^2-t^2)^{\nu-1}f(s,x)2sds\big)2tdt.$$ Applying Fubini's Theorem we get $$\mathscr{H}_\mu\circ\mathscr{H}_\nu(f)(r,x)=\frac{1}{2^{\mu+\nu}\ \Gamma(\mu)\Gamma(\nu)}\int_r^\infty f(s,x) \big(\int_r^s(s^2-t^2)^{\nu-1}(t^2-r^2)^{\mu-1}2tdt\big)2sds,$$ however,$$\int_r^s(s^2-t^2)^{\nu-1}(t^2-r^2)^{\mu-1}2tdt=\frac{\Gamma(\mu)\ \Gamma(\nu)}{\Gamma(\mu+\nu)}(s^2-r^2)^{\mu+\nu-1},$$ this completes the proof.

Proposition 2.16. i. For every \(f\in\mathscr{S}_e(\mathbb{R}\times\mathbb{R}^n)\) and \(\mu>0\), we have

\begin{eqnarray}\label{2.6} \displaystyle \frac{\partial}{\partial t^2}\mathscr{H}_\mu(f)=\mathscr{H}_\mu(\frac{\partial}{\partial t^2}f). \end{eqnarray}
(19)

ii. For every \(f\in\mathscr{S}_e(\mathbb{R}\times\mathbb{R}^n)\) and \(\mu>0\), we have
\begin{eqnarray}\label{2.7} -\mathscr{H}_{\mu+1}(\displaystyle\frac{\partial}{\partial t^2}f)=\mathscr{H}_\mu(f). \end{eqnarray}
(20)

Proof. i. Integrating by parts, we get for every \(f\in\mathscr{S}_e(\mathbb{R}\times\mathbb{R}^n)\), $$\mathscr{H}_\mu(f)(t,x)=-\frac{1}{2^\mu\ \Gamma(\mu+1)}\int_t^\infty(r^2-t^2)^\mu\displaystyle\frac{\partial f}{\partial r}(r,x)dr.$$ Hence, \begin{eqnarray*} \displaystyle \frac{\partial}{\partial t^2}\mathscr{H}_\mu(f)(t,x)&=& \frac{1}{2^\mu\ \Gamma(\mu)}\int_t^\infty(r^2-t^2)^{\mu-1}\frac{\partial f}{\partial r^2}(r,x)2rdr\\&=&\mathscr{H}_\mu(\frac{\partial}{\partial r^2}f)(t,x). \end{eqnarray*} ii. For every \(f\in\mathscr{S}_e(\mathbb{R}\times\mathbb{R}^n),\ \mu>0\), and from relation (19), $$\frac{\partial}{\partial t^2}\mathscr{H}_{\mu+1}(f)=\mathscr{H}_{\mu+1}(\frac{\partial}{\partial t^2}f).$$ So, for every \((t,x)\in\mathbb{R}\times\mathbb{R}^n\), \begin{eqnarray*} \mathscr{H}_{\mu+1}(\frac{\partial}{\partial t^2}f)(t,x)&=& \frac{\partial}{\partial t^2}\Big(\frac{1}{2^{\mu+1}\ \Gamma(\mu+1)}\int_t^\infty(r^2-t^2)^\mu f(r,x)2rdr\Big)\\&=&-\mathscr{H}_\mu(f)(t,x). \end{eqnarray*}

Corollary 2.17. Let \(\mu\) be a real number. For all \(k_1,\ k_2\in \mathbb{N},\ k_1+\mu>0,\ k_2+\mu>0\) and for every \(f\in\mathscr{S}_e(\mathbb{R}\times\mathbb{R}^n)\), we have $$\displaystyle (-1)^{k_1}\mathscr{H}_{\mu+k_1}\Big((\frac{\partial}{\partial t^2})^{k_1}f\Big)= (-1)^{k_2}\mathscr{H}_{\mu+k_2}\Big((\frac{\partial}{\partial t^2})^{k_2}f\Big).$$

Proof. Let \(k_1,\ k_2 \in \mathbb{N},\ k_1 < k_2,\ k_1 +\mu>0\) and \(k_2+\mu>0\). From Proposition 2.16, it follows that for every \(f\in\mathscr{S}_e(\mathbb{R}\times\mathbb{R}^n)\), \begin{eqnarray*} (-1)^{k_2}\mathscr{H}_{\mu+k_2}((\frac{\partial}{\partial t^2})^{k_2}f)&=& (-1)^{k_1}(-1)^{k_2-k_1}\mathscr{H}_{\mu+k_1+(k_2-k_1)}\Big((\frac{\partial}{\partial t^2})^{k_2-k_1} (\frac{\partial}{\partial t^2})^{k_1}(f)\Big)\\&=&(-1)^{k_1}\mathscr{H}_{\mu+k_1}((\frac{\partial}{\partial t^2})^{k_1}f). \end{eqnarray*}

Definition 2.18. For every \(\mu\in\mathbb{R}\), the fractional transform \(\mathscr{H}_\mu\) is defined on \(\mathscr{S}_e(\mathbb{R}\times\mathbb{R}^n)\) by $$\mathscr{H}_\mu(f)=(-1)^k\mathscr{H}_{\mu+k}((\frac{\partial}{\partial t^2})^kf)=(-1)^k(\frac{\partial}{\partial t^{2}})^k\mathscr{H}_{\mu+k}(f),$$ where \(k\in \mathbb{N},\ k+\mu>0\).

We have the following properties,
From Corollary 2.17, the expression \(\mathscr{H}_\mu\) in Definition 2.18 is independent of the choice of \(k\in \mathbb{N},\ k+\mu>0\).
For every \(f\in\mathscr{S}_e(\mathbb{R}\times\mathbb{R}^n)\),
\begin{eqnarray} \nonumber\mathscr{H}_0(f)(t,x)&=&-\frac{\partial}{\partial t^2}\mathscr{H}_1(f)(t,x)\\ &=&\label{rel2.8} -\frac{1}{t}\frac{\partial}{\partial t}\big(\int_t^\infty f(r,x)rdr\big)=f(t,x). \end{eqnarray}
(21)

Proposition 2.19. i. For every \(\mu,\ \nu \in\mathbb{R}\) and \(f\in\mathscr{S}_e(\mathbb{R}\times\mathbb{R}^n).\)

\begin{eqnarray}\label{2.9} \mathscr{H}_\mu\circ\mathscr{H}_\nu(f)=\mathscr{H}_{\mu+\nu}(f) \end{eqnarray}
(22)

ii. For every \(\mu \in\mathbb{R}\), the fractional transform \(\mathscr{H}_\mu\) is a topological isomorphism from \(\mathscr{S}_e(\mathbb{R}\times\mathbb{R}^n)\) onto itself whose inverse isomorphism is $$\mathscr{H}_\mu^{-1}=\mathscr{H}_{-\mu}.$$

Proof. i. Let \(\mu,\ \nu \in\mathbb{R},\ k_1,\ k_2\in \mathbb{N},\ k_1+\mu>0,\ k_2+\mu>0\) and \(f\in\mathscr{S}_e(\mathbb{R}\times\mathbb{R}^n),\) we have \begin{eqnarray*} \mathscr{H}_\mu \circ\mathscr{H}_\nu(f)&=&\mathscr{H}_\mu \Big((-1)^{k_2}(\frac{\partial}{\partial t^2})^{k_2}\mathscr{H}_{\nu+k_2}(f)\Big)\\&=& (-1)^{k_1+k_2}\mathscr{H}_{\mu+k_1}\Big((\frac{\partial}{\partial t^2})^{k_1}\mathscr{H}_{\nu+k_2}\big((\frac{\partial}{\partial t^2})^{k_2}(f)\big)\Big)\\&=&(-1)^{k_1+k_2}\mathscr{H}_{\mu+k_1}\circ\mathscr{H}_{\nu+k_2}\big((\frac{\partial}{\partial t^2})^{k_1+k_2}(f) \big). \end{eqnarray*} Now, from Proposition 2.15, we deduce that \begin{eqnarray*} \mathscr{H}_\mu\circ\mathscr{H}_\nu(f)&=&(-1)^{k_1+k_2}\mathscr{H}_{\mu+\nu+k_2+k_1}\Big((\frac{\partial}{\partial t^2})^{k_1+k_2}(f)\Big)\\&= &\mathscr{H}_{\mu+\nu}(f), \end{eqnarray*} because \(\mu+\nu+k_1+k_2>0.\)
ii. The result follows from relations (21) and (22).

3. \(L^{p}\)-boundedness of the fractional transform \(\mathscr{R}_\mu\) and \(\mathscr{H}_\mu\)

This section contains the main results of this work. In fact, we study the boundedness of the operators \(\mathscr{R}_\mu\) and \(\mathscr{H}_\mu\) on the weighted Lebesgue spaces \(L^p\big([0,+\infty[\times\mathbb{R}^n,r^{2a}drdx\big), p\in[1,+\infty]\) equipped with the norm $$\displaystyle||f||_{p,a}=\left\{ \begin{array}{ll} \displaystyle \Big(\int_0^\infty\int_{\mathbb{R}^n}|f(r,x)|^{p}r^{2a}drdx\Big)^{\frac{1}{p}},\ \hbox{ if} \ 1\leqslant p< +\infty\\ \renewcommand{\arraystretch}{0.5} \begin{array}[t]{c} \hbox{ess sup} \\ {\scriptstyle (r,x)\in\ [0,+\infty[\times\mathbb{R}^n } \end{array}\big|f(r,x)\big| ,\ \hbox{if}\ p=+\infty. \end{array} \right.$$ For convenience we refer to this space as \(L^p(d\gamma_a)\) with \(d\gamma_a(r,x)=r^{2a}drdx\).

3.1. \(L^p\)-boundedness of the fractional transform \(\mathscr{R}_\mu\)

Proposition 3.1. For every \(a\in\mathbb{R}\) and every \(\mu>0\), the fractional transform \(\mathscr{R}_\mu\) is bounded from \(L^\infty(d\gamma_a)\) into itself and $$||\mathscr{R}_\mu||_{\infty,\gamma_a}=\sup_{||f||_{\infty,a}\leqslant1}||\mathscr{R}_\mu(f)||_{\infty,a}=1.$$

Proof. Let \(f\) be a bounded measurable function on \([0,+\infty[\times\mathbb{R}^n\). For every \((r,x)\in[0,+\infty[\times\mathbb{R}^n,\) \begin{eqnarray*} |\mathscr{R}_\mu(f)(r,x)|&\leqslant& \frac{2\Gamma(\mu+\frac{1}{2})}{\sqrt{\pi}\Gamma(\mu)}\int_0^1(1-t^2)^{\mu-1}|f(tr,x)|dt \\&\leqslant & ||f||_{\infty,a} \frac{2\Gamma(\mu+\frac{1}{2})}{\sqrt{\pi}\Gamma(\mu)}\int_0^1(1-t^2)^{\mu-1}dt\\&=&||f||_{\infty,a}. \end{eqnarray*} This shows that the operator \(\mathscr{R}_\mu\) is bounded from \(L^\infty(d\gamma_a)\) into itself and that $$||\mathscr{R}_\mu||_{\infty,\gamma_a}\leqslant1.$$ However, \(\mathscr{R}_\mu(1)=1\), this shows that $$||\mathscr{R}_\mu||_{\infty,\gamma_a}=1.$$
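The equality \(\mathscr{R}_\mu(1)=1\) used above is again a consequence of the Beta integral: $$\mathscr{R}_\mu(1)(r,x)=\frac{2\Gamma(\mu+\frac{1}{2})}{\sqrt{\pi}\ \Gamma(\mu)}\int_0^1(1-t^2)^{\mu-1}dt=\frac{2\Gamma(\mu+\frac{1}{2})}{\sqrt{\pi}\ \Gamma(\mu)}\cdot\frac{\sqrt{\pi}\ \Gamma(\mu)}{2\ \Gamma(\mu+\frac{1}{2})}=1.$$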

Theorem 3.2. The operator \(\mathscr{R}_\mu; \mu>0\) is bounded from \(L^1(d\gamma_a)\) into itself if and only if \(a< 0\) and in this case $$||\mathscr{R}_\mu||_{1,\gamma_a}=\frac{\Gamma(\mu+\frac{1}{2})\Gamma(-a)}{\sqrt{\pi}\ \Gamma(\mu-a)}.$$

Proof. Let \(a \in \mathbb{R},\ a< 0\). By the Fubini-Tonelli theorem and for every \(f\in L^1(d\gamma_a)\), \begin{eqnarray*} \int_0^\infty\int_{\mathbb{R}^n}|\mathscr{R}_\mu(f)(r,x)|d\gamma_a(r,x)&\leqslant & \frac{2\Gamma(\mu+\frac{1}{2})}{\sqrt{\pi}\ \Gamma(\mu)}\int_0^\infty\int_{\mathbb{R}^n}\big(\int_0^1(1-t^2)^{\mu-1} |f(tr,x)|dt\big)d\gamma_a(r,x)\\&=&\frac{2\Gamma(\mu+\frac{1}{2})}{\sqrt{\pi} \ \Gamma(\mu)}\int_0^1(1-t^2)^{\mu-1} \big(\int_0^\infty\int_{\mathbb{R}^n}|f(tr,x)|d\gamma_a(r,x)\big)dt \\&= & ||f||_{1,a} \frac{2\Gamma(\mu+\frac{1}{2})}{\sqrt{\pi}\ \Gamma(\mu)}\int_0^1(1-t^2)^{\mu-1}t^{-(2a+1)}dt \\&=&\frac{\Gamma(\mu+\frac{1}{2})\Gamma(-a)}{\sqrt{\pi} \ \Gamma(\mu-a)}||f||_{1,a}. \end{eqnarray*} Consequently, for \(a< 0\), the transform \(\mathscr{R}_\mu\) is a bounded operator from \(L^1(d\gamma_a)\) into itself and $$||\mathscr{R}_\mu||_{1,\gamma_a}\leqslant\frac{\Gamma(\mu+\frac{1}{2})\Gamma(-a)}{\sqrt{\pi} \ \Gamma(\mu-a)}. $$ On the other hand, for every nonnegative \(f\in L^1(d\gamma_a)\), we have $$ ||\mathscr{R}_\mu(f)||_{1,a}=\frac{\Gamma(\mu+\frac{1}{2})\Gamma(-a)}{\sqrt{\pi} \ \Gamma(\mu-a)}||f||_{1,a}. $$ We conclude that $$ ||\mathscr{R}_\mu||_{1,\gamma_a}=\frac{\Gamma(\mu+\frac{1}{2})\Gamma(-a)}{\sqrt{\pi} \ \Gamma(\mu-a)}.$$ Conversely, let \(a \in \mathbb{R},\ a\geqslant0\) and let \(f\in L^1(d\gamma_a)\) be a nonnegative function such that \(||f||_{1,a} =1\). We have $$ ||\mathscr{R}_\mu(f)||_{1,a}=\frac{2\Gamma(\mu+\frac{1}{2})}{\sqrt{\pi}\ \Gamma(\mu)}\int_0^1(1-t^2)^{\mu-1}t^{-(2a+1)}dt=+\infty.$$ This completes the proof.

Theorem 3.3. Let \(p\in ]1,+\infty[\). The operator \(\mathscr{R}_\mu,\ \mu>0\), is bounded from \(L^p(d\gamma_a)\) into itself if and only if \(2a+1< p\) and in this case $$ ||\mathscr{R}_\mu||_{p,\gamma_a}=\frac{\Gamma(\mu+\frac{1}{2})\Gamma(\frac{p-(2a+1)}{2p})}{\sqrt{\pi} \ \Gamma(\mu+\frac{p-(2a+1)}{2p})}. $$

Proof. Let \(p\in \ ]1,+\infty[,\ 2a+1< p\). From Minkowski's inequality [18] and for every \(f\in L^p(d\gamma_a)\), \begin{eqnarray*} ||\mathscr{R}_\mu(f)||_{p,a}&\leqslant& \frac{2\Gamma(\mu+\frac{1}{2})}{\sqrt{\pi} \ \Gamma(\mu)}\int_0^1(1-t^2)^{\mu-1} \Big(\int_0^\infty\int_{\mathbb{R}^n}|f(tr,x)|^{p}d\gamma_a(r,x)\Big)^{\frac{1}{p}}dt \\&= & \frac{2\Gamma(\mu+\frac{1}{2})}{\sqrt{\pi} \ \Gamma(\mu)}\|f\|_{p,a}\int_0^1(1-t^2)^{\mu-1}t^{-\frac{2a+1}{p}}dt \\&=&\frac{\Gamma(\mu+\frac{1}{2})\Gamma(\frac{p-(2a+1)}{2p})}{\sqrt{\pi} \ \Gamma(\mu+\frac{p-(2a+1)}{2p})} ||f||_{p,a}. \end{eqnarray*} This proves that for \(2a+1< p\), the fractional transform \(\mathscr{R}_\mu\) is bounded from \(L^p(d\gamma_a)\) into itself and

\begin{eqnarray}\label{rel3.1} % \nonumber to remove numbering (before each equation) ||\mathscr{R}_\mu||_{p,\gamma_a}&\leqslant &\frac{\Gamma(\mu+\frac{1}{2})\Gamma(\frac{p-(2a+1)}{2p})}{\sqrt{\pi} \ \Gamma(\mu+\frac{p-(2a+1)}{2p})}. \end{eqnarray}
(23)
Let \(\eta>0\) and let $$f_0(r,x) =r^{\frac{\eta-(2a+1)}{p}}\textbf{1}_{]0,1[}(r)\Pi_{j=1}^n\textbf{1}_{]0,1[}(x_j),$$ then \(f_0\) belongs to \(L^p(d\gamma_a)\) and $$ ||f_0||_{p,a}=(\frac{1}{\eta})^{\frac{1}{p}}.$$ On the other hand, \begin{eqnarray*} |\mathscr{R}_\mu(f_0)(r,x)|&\geqslant& \frac{2\Gamma(\mu+\frac{1}{2})}{\sqrt{\pi} \ \Gamma(\mu)}r^{1-2\mu}\Big(\int_0^r (r^2-t^2)^{\mu-1}t^{\frac{\eta-(2a+1)}{p}}dt\Big) \textbf{1}_{]0,1[}(r)\Pi_{j=1}^n \textbf{1}_{]0,1[}(x_j) \\&= & \frac{2\Gamma(\mu+\frac{1}{2})}{\sqrt{\pi} \ \Gamma(\mu)}f_0(r,x)\int_0^1 (1-t^2)^{\mu-1}t^{\frac{\eta-(2a+1)}{p}}dt \\&=&\frac{\Gamma(\mu+\frac{1}{2})\Gamma(\frac{1}{2}+\frac{\eta-(2a+1)}{2p})}{\sqrt{\pi} \ \Gamma(\mu+\frac{1}{2}+\frac{\eta-(2a+1)}{2p})} f_0(r,x). \end{eqnarray*} Integrating over \(]0,+\infty[\times\mathbb{R}^n\) with respect to the measure \(d\gamma_a\), we deduce that for every \(\eta>0,\) $$ ||\mathscr{R}_\mu||_{p,\gamma_a}\geq \frac{\Gamma(\mu+\frac{1}{2})\Gamma(\frac{1}{2}+\frac{\eta-(2a+1)}{2p})}{\sqrt{\pi} \ \Gamma(\mu+\frac{1}{2}+\frac{\eta-(2a+1)}{2p})}. $$ This involves that
\begin{eqnarray}\label{rel3.2} ||\mathscr{R}_\mu||_{p,\gamma_a}&\geq&\frac{\Gamma(\mu+\frac{1}{2})\Gamma(\frac{p-(2a+1)}{2p})}{\sqrt{\pi} \ \Gamma(\mu+\frac{p-(2a+1)}{2p})}. \end{eqnarray}
(24)
The relations (23) and (24) imply that for every \(a\) with \(2a+1< p\), $$ ||\mathscr{R}_\mu||_{p,\gamma_a}=\frac{\Gamma(\mu+\frac{1}{2})\Gamma(\frac{p-(2a+1)}{2p})}{\sqrt{\pi} \ \Gamma(\mu+\frac{p-(2a+1)}{2p})}.$$ Now, we prove that, for \(2a+1\geqslant p\), \(\mathscr{R}_\mu\) does not map \(L^p(d\gamma_a)\) into itself. To prove this we consider the following two cases:
Case 1. Suppose that \(2a+1=p\) and let $$ g_0(r,x)=\frac{1}{r(1-\ln(r))}\textbf{1}_{]0,1[}(r)\Pi_{j=1}^n\textbf{1}_{]0,1[}(x_j),$$ then, \(g_0\) belongs to \(L^p(d\gamma_a)\) and we have $$ ||g_0||_{p,a}^p=\int_0^1\frac{dr}{r(1-\ln(r))^p}=\int_{-\infty}^0\frac{ds}{(1-s)^p}=\frac{1}{p-1}.$$ However, for every \((r,x)\in ]0,1[\times]0,1[^n,\) $$ \mathscr{R}_\mu(g_0)(r,x)= \frac{2\Gamma(\mu+\frac{1}{2})}{\sqrt{\pi}\Gamma(\mu)}r^{1-2\mu}\int_0^r(r^2-t^2)^{\mu-1}\frac{dt}{t(1-\ln(t))}=+\infty,$$ in particular \(\mathscr{R}_\mu(g_0)\) does not belong to \(L^p(d\gamma_a).\)
Case 2. Suppose that \(2a+1>p\) and let \(\eta\in\mathbb{R}; -\frac{2a+1}{p}< \eta < -1\) and let $$ h_0(r,x)=r^\eta\textbf{1}_{]0,1[}(r)\Pi_{j=1}^n\textbf{1}_{]0,1[}(x_{j}).$$ Then the function \(h_0\) lies in \(L^p(d\gamma_a)\) and $$ ||h_0||_{p,a}^p=\frac{1}{p\eta+2a+1}.$$ But, for every \((r,x)\in ]0,1[\times]0,1[^n,\) $$ \mathscr{R}_\mu(h_0)(r,x)= \frac{2\Gamma(\mu+\frac{1}{2})}{\sqrt{\pi}\ \Gamma(\mu)}r^\eta \int_0^1 (1-t^2)^{\mu-1}t^\eta dt=+\infty.$$ Hence, for \(2a+1>p\), \(\mathscr{R}_\mu\) does not map \(L^p(d\gamma_a)\) into itself and this completes the proof of theorem.

Combining Proposition 3.1, Theorem 3.2 and Theorem 3.3, we obtain the following result.

Theorem 3.4. For every \(p\in [1,+\infty]\), the fractional operator \(\mathscr{R}_\mu\) is bounded on \(L^p(d\gamma_a)\) if and only if \(2a+1< p\) and in this case $$ ||\mathscr{R}_\mu||_{p,\gamma_a}=\frac{\Gamma(\mu+\frac{1}{2})\Gamma(\frac{p-(2a+1)}{2p})}{\sqrt{\pi}\ \Gamma(\mu+\frac{p-(2a+1)}{2p})}. $$

Remark 3.5. The case \(a=\mu\) in Theorem 3.4 is important because the measure \(d\nu_\mu\) defined by the relation (3) is connected with the operators \(D_j,\ 1\leqslant j\leqslant n\), \(\Xi_\mu\) and the Fourier-Hankel transform \(\widetilde{\mathscr{F}}_{\mu-\frac{1}{2}}\) given by relation (2). In this case, \(\mathscr{R}_\mu\) is bounded from \(L^p(d\nu_\mu)\) into itself if and only if \(2\mu+1< p\) and we have $$ ||\mathscr{R}_\mu||_{p,\nu_\mu}=\frac{\Gamma(\mu+\frac{1}{2})\Gamma(\frac{p-(2\mu+1)}{2p})}{\sqrt{\pi} \ \Gamma(\mu+\frac{p-(2\mu+1)}{2p})}. $$
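For instance, for \(\mu=\frac{1}{2}\), the transform \(\mathscr{R}_{\frac{1}{2}}\) is bounded on \(L^p(d\nu_{\frac{1}{2}})\) exactly when \(p>2\), with $$||\mathscr{R}_{\frac{1}{2}}||_{p,\nu_{\frac{1}{2}}}=\frac{\Gamma(\frac{1}{2}-\frac{1}{p})}{\sqrt{\pi}\ \Gamma(1-\frac{1}{p})};$$ in particular, for \(p=4\) this norm equals \(\frac{\Gamma(\frac{1}{4})}{\sqrt{\pi}\ \Gamma(\frac{3}{4})}.\)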

3.2. \(L^p\)-boundedness of the fractional transform \(\mathscr{H}_\mu\)

We denote by \( r^{-2\mu}L^p(d\gamma_a)\) the space defined by \(r^{-2\mu}L^p(d\gamma_a)=\big\{f:\ ]0,+\infty[\times \mathbb{R}^n\longrightarrow\mathbb{C},\) \(f\) is measurable and the function \((r,x)\longmapsto r^{2\mu}f(r,x)\) belongs to \(L^p(d\gamma_a) \big \} \)
\(r^{-2\mu}L^p(d\gamma_a)\) is equipped with the norm $$ N_{p,a}(f)= ||r^{2\mu}f||_{p,a}.$$

Theorem 3.6. The operator \(\mathscr{H}_\mu,\ \mu>0\) is bounded from \(r^{-2\mu}L^1(d\gamma_a)\) into \(L^1(d\gamma_a)\) if and only if \(2a+1>0\) and in this case $$N_{1,\gamma_a}(\mathscr{H}_\mu)= \sup_{||r^{2\mu}f||_{1,a}\leqslant1}||\mathscr{H}_\mu(f)||_{1,a}=\frac{\Gamma(\frac{2a+1}{2})}{2^\mu\ \Gamma(\mu+\frac{2a+1}{2})}.$$

Proof. Suppose that \(a>-\frac{1}{2}\) and let \(f\in r^{-2\mu}L^1(d\gamma_a)\). We have $$ \big|\mathscr{H}_\mu(f)(r,x)\big|\leq\frac{r^{2\mu}}{2^\mu\ \Gamma(\mu)}\int_1^\infty(t^2-1)^{\mu-1}\big|f(rt,x)\big|2tdt. $$ Applying the Fubini-Tonelli theorem, we get \begin{eqnarray*} \int_0^\infty\int_{\mathbb{R}^n}\big|\mathscr{H}_\mu(f)(r,x)\big|d\gamma_a(r,x)&\leqslant & \frac{1}{2^\mu\ \Gamma(\mu) }\int_1^\infty(t^2-1)^{\mu-1}\Big(\int_0^\infty\int_{\mathbb{R}^n}r^{2\mu+2a}|f(tr,x)|drdx\Big)2tdt \\&=&||r^{2\mu}f||_{1,a}\frac{1}{2^\mu\ \Gamma(\mu)}\int_1^\infty(t^2-1)^{\mu-1}t^{-(2\mu+2a+1)}2tdt. \end{eqnarray*} By the change of variable \(s=\frac{1}{t^2}\), we have $$\frac{1}{2^\mu\ \Gamma(\mu)}\int_1^\infty(t^2-1)^{\mu-1}t^{-(2\mu+2a+1)}2tdt=\frac{\Gamma(\frac{2a+1}{2})}{2^\mu\ \Gamma(\mu+\frac{2a+1}{2})}.$$ This shows that for every \(f\in r^{-2\mu}L^1(d\gamma_a)\), the function \(\mathscr{H}_\mu(f)\) belongs to \(L^1(d\gamma_a)\) and $$||\mathscr{H}_\mu(f)||_{1,a}\leqslant\frac{\Gamma(\frac{2a+1}{2})}{2^\mu\ \Gamma(\mu+\frac{2a+1}{2})}||r^{2\mu}f||_{1,a}.$$ On the other hand, for every nonnegative function \(f\in r^{-2\mu}L^1(d\gamma_a)\), we have

\begin{eqnarray}\label{rel3.3} ||\mathscr{H}_\mu(f)||_{1,a}=\frac{\Gamma(\frac{2a+1}{2})}{2^\mu\ \Gamma(\mu+\frac{2a+1}{2})}||r^{2\mu}f||_{1,a}. \end{eqnarray}
(25)
Hence, for \(a>-\frac{1}{2}\), the fractional transform \(\mathscr{H}_\mu\) is continuous from \(r^{-2\mu}L^1(d\gamma_a)\) into \(L^1(d\gamma_a)\) and
$$N_{1,\gamma_a}(\mathscr{H}_\mu)=\frac{\Gamma(\frac{2a+1}{2})}{2^\mu\ \Gamma(\mu+\frac{2a+1}{2})}.$$ Let \(a\leqslant-\frac{1}{2}\) and let \(f\in r^{-2\mu}L^1(d\gamma_a)\) be a nonnegative function such that
\(||r^{2\mu}f||_{1,a}=1 \). From relation (25) $$||\mathscr{H}_\mu(f)||_{1,a}=+\infty,$$ which proves that for \(a\leq-\frac{1}{2},\) the operator \(\mathscr{H}_\mu\) does not map the space \(r^{-2\mu}L^1(d\gamma_a)\) into \(L^1(d\gamma_a).\)
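The change of variable \(s=\frac{1}{t^2}\) used in the proof above reduces the integral to a Beta integral: $$\frac{1}{2^\mu\ \Gamma(\mu)}\int_1^\infty(t^2-1)^{\mu-1}t^{-(2\mu+2a+1)}2tdt=\frac{1}{2^\mu\ \Gamma(\mu)}\int_0^1(1-s)^{\mu-1}s^{\frac{2a+1}{2}-1}ds=\frac{\Gamma(\frac{2a+1}{2})}{2^\mu\ \Gamma(\mu+\frac{2a+1}{2})}.$$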

Theorem 3.7. For every \(p\in]1,+\infty[\), the fractional transform \(\mathscr{H}_\mu\) is bounded from \(r^{-2\mu}L^p(d\gamma_a)\) into \(L^p(d\gamma_a)\) if and only if \(2a+1>0\) and in this case $$N_{p,\gamma_a}(\mathscr{H}_\mu)= \sup_{||r^{2\mu}f||_{p,a}\leqslant1}||\mathscr{H}_\mu(f)||_{p,a}=\frac{\Gamma(\frac{2a+1}{2p})}{2^\mu\ \Gamma(\mu+\frac{2a+1}{2p})}.$$

Proof. Let \(a>-\frac{1}{2}\) and \(f\in r^{-2\mu}L^p(d\gamma_a)\). By Minkowski's inequality, we have \begin{eqnarray*} ||\mathscr{H}_\mu(f)||_{p,a} &\leqslant & \frac{1}{2^\mu\ \Gamma(\mu) }\int_1^\infty(t^2-1)^{\mu-1}\Big(\int_0^\infty\int_{\mathbb{R}^n}(r^{2\mu}|f(tr,x)|)^p r^{2a} drdx\Big)^{\frac{1}{p}}2tdt \\&=&||r^{2\mu}f||_{p,a}\frac{1}{2^\mu\ \Gamma(\mu)}\int_1^\infty(t^2-1)^{\mu-1}t^{-\frac{2\mu p+2a+1}{p}}2tdt\\&=&\frac{\Gamma(\frac{2a+1}{2p})}{2^\mu\ \Gamma(\mu+\frac{2a+1}{2p})}||r^{2\mu}f||_{p,a}. \end{eqnarray*} Consequently, for \(a>-\frac{1}{2}\), \(\mathscr{H}_\mu\) is a bounded operator from \(r^{-2\mu}L^p(d\gamma_a)\) into \(L^p(d\gamma_a)\) and

\begin{eqnarray}\label{rel3.4} N_{p,\gamma_a}(\mathscr{H}_\mu)&\leqslant& \frac{\Gamma(\frac{2a+1}{2p})}{2^\mu\ \Gamma(\mu+\frac{2a+1}{2p})}. \end{eqnarray}
(26)
Let \(\eta \in \mathbb{R},\ \eta>0\), and let $$ f_0(r,x)=r^{-2\mu-\frac{2a+\eta+1}{p}}\textbf{1}_{[1,+\infty[}(r)\Pi_{j=1}^n \textbf{1}_{]0,1[}(x_j).$$ The function \(f_0\) belongs to \(r^{-2\mu}L^p(d\gamma_a)\) and $$||r^{2\mu}f_0||_{p,a} =(\frac{1}{\eta})^{\frac{1}{p}}.$$ Moreover, \begin{eqnarray*} |\mathscr{H}_\mu(f_0)(r,x)| &= & \mathscr{H}_\mu(f_0)(r,x)\\ &\geq&\frac{1}{2^\mu\ \Gamma(\mu) }\Big(\int_r^\infty(t^2-r^2)^{\mu-1}t^{-2\mu-\frac{2a+1+\eta}{p}}2tdt\Big)\textbf{1}_{[1,+\infty[}(r)\Pi_{j=1}^n \textbf{1}_{]0,1[}(x_j)\\&=&\frac{\Gamma(\frac{2a+1+\eta}{2p})}{2^\mu\ \Gamma(\mu+\frac{2a+1+\eta}{2p})}r^{2\mu}f_0(r,x). \end{eqnarray*} Thus, $$ ||\mathscr{H}_\mu(f_0)||_{p,a} \geqslant \frac{\Gamma(\frac{2a+1+\eta}{2p})}{2^\mu\ \Gamma(\mu+\frac{2a+1+\eta}{2p})}||r^{2\mu}f_0||_{p,a}$$ and then, for every \(\eta>0,\)
$$N_{p,\gamma_a}(\mathscr{H}_\mu)\geqslant \frac{\Gamma(\frac{2a+1+\eta}{2p})}{2^\mu\ \Gamma(\mu+\frac{2a+1+\eta}{2p})}.$$ This implies that
\begin{eqnarray}\label{rel3.5} N_{p,\gamma_a}(\mathscr{H}_\mu)&\geqslant&\frac{\Gamma(\frac{2a+1}{2p})}{2^\mu\ \Gamma(\mu+\frac{2a+1}{2p})}. \end{eqnarray}
(27)
Combining the relations (26) and (27), we deduce that for \(a>-\frac{1}{2}\), the fractional transform \(\mathscr{H}_\mu\) is a bounded operator from \(r^{-2\mu}L^p(d\gamma_a)\) into \(L^p(d\gamma_a)\) and that
$$N_{p,\gamma_a}(\mathscr{H}_\mu)= \frac{\Gamma(\frac{2a+1}{2p})}{2^\mu\ \Gamma(\mu+\frac{2a+1}{2p})}.$$ Now we prove that, for \(a\leqslant-\frac{1}{2},\) the operator \(\mathscr{H}_\mu\) does not map the space \(r^{-2\mu}L^p(d\gamma_{a})\) into \(L^p(d\gamma_{a}).\) We have two cases:
Case 1. Suppose that \(2a+1=0\) and let $$g_0(r,x)=\frac{1}{r^{2\mu}(1+\ln(r))}\textbf{1}_{[1,+\infty[}(r)\Pi_{j=1}^n \textbf{1}_{]0,1[}(x_j).$$ The function \(g_0\) belongs to \(r^{-2\mu}L^p(d\gamma_{-\frac{1}{2}})\) and \begin{eqnarray*} ||r^{2\mu}g_0||_{p,-\frac{1}{2}} &= & \big(\int_1^\infty \frac{dr}{r(1+\ln(r))^p}\big)^{\frac{1}{p}}\\&=& \big(\int_0^\infty\frac{du}{(1+u)^p}\big)^{\frac{1}{p}}\\&=&(\frac{1}{p-1})^{\frac{1}{p}}. \end{eqnarray*} But for every \((r,x)\in]1,+\infty[\times]0,1[^n,\) $$\mathscr{H}_\mu(g_0)(r,x)=\frac{1}{2^\mu\ \Gamma(\mu)}\int_r^\infty(t^2-r^2)^{\mu-1}\frac{2t}{t^{2\mu}(1+\ln(t))}dt=+\infty.$$ This shows that for \(a=-\frac{1}{2},\) the operator \(\mathscr{H}_\mu\) does not map the space \(r^{-2\mu}L^p(d\gamma_{-\frac{1}{2}})\) into \(L^p(d\gamma_{-\frac{1}{2}}).\)
Case 2. Finally, suppose that \(a< -\frac{1}{2}\) and let \(\eta\in\mathbb{R}\) be such that \(\frac{1}{2} < \eta < -a.\)
Let $$ h_0(r,x)=r^{-2\mu-\frac{2a+2\eta}{p}}\textbf{1}_{[1,+\infty[}(r)\Pi_{j=1}^n \textbf{1}_{]0,1[}(x_j).$$ The function \(h_0\) belongs to \(r^{-2\mu}L^p(d\gamma_a)\), and $$ ||r^{2\mu}h_0||_{p,a} = \big(\int_1^\infty r^{-2\eta}dr\big)^{\frac{1}{p}}=(\frac{1}{2\eta-1})^{\frac{1}{p}}.$$ However, for every \((r,x)\in]1,+\infty[\times]0,1[^n,\) $$\mathscr{H}_\mu(h_0)(r,x)=\frac{1}{2^\mu\ \Gamma(\mu) }\int_r^\infty(t^2-r^2)^{\mu-1}t^{-2\mu-\frac{2a+2\eta}{p}}2t\,dt=+\infty,\ \hbox{because}\ a+\eta< 0.$$ Hence, for \(a< -\frac{1}{2}\), the operator \(\mathscr{H}_\mu\) does not map the space \(r^{-2\mu}L^p(d\gamma_a)\) into \(L^p(d\gamma_a).\)
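The same tail estimate as in Case 1 gives the divergence here: for \(t\geqslant 2r\), \((t^2-r^2)^{\mu-1}\geqslant c_\mu\,t^{2\mu-2}\), so
$$\int_r^\infty(t^2-r^2)^{\mu-1}t^{-2\mu-\frac{2a+2\eta}{p}}2t\,dt
\ \geqslant\ 2c_\mu\int_{2r}^\infty t^{-1-\frac{2(a+\eta)}{p}}\,dt\ =\ +\infty,$$
the last integral being divergent because its exponent \(-1-\frac{2(a+\eta)}{p}\) is greater than \(-1\) when \(a+\eta<0\).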
The proof of the theorem is complete.

Remark 3.8. For every \(a\in\mathbb{R}\), the fractional transform \(\mathscr{H}_\mu\) does not map the space \(r^{-2\mu}L^\infty(d\gamma_a)\) into itself.
In fact, the function \(f(r,x)=r^{-2\mu}\textbf{1}_{[1,+\infty[}(r)\) belongs to \(r^{-2\mu}L^\infty(d\gamma_a)\), but for every \((r,x)\in]0,+\infty[\times\mathbb{R}^n\) $$\mathscr{H}_\mu(f)(r,x)=\frac{1}{2^\mu\ \Gamma(\mu) }\int_{\max(r,1)}^\infty(t^2-r^2)^{\mu-1}t^{-2\mu}\,2t\,dt=+\infty.$$
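Again the divergence is a tail estimate: for \(t\geqslant\max(2r,2)\) one has \((t^2-r^2)^{\mu-1}\geqslant c_\mu\,t^{2\mu-2}\) with \(c_\mu=\min\{1,(3/4)^{\mu-1}\}\), so
$$\int_{\max(r,1)}^\infty(t^2-r^2)^{\mu-1}t^{-2\mu}\,2t\,dt\ \geqslant\ 2c_\mu\int_{\max(2r,2)}^\infty\frac{dt}{t}\ =\ +\infty.$$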

We conclude that for every \(p\in[1,+\infty[\), the transform \(\mathscr{H}_\mu,\ \mu>0,\) is bounded from \(r^{-2\mu}L^p(d\gamma_a)\) into \(L^p(d\gamma_a)\) if and only if \(2a+1>0\) and $$N_{p,\gamma_a}(\mathscr{H}_\mu)= \sup_{||r^{2\mu}f||_{p,a}\leqslant1}||\mathscr{H}_\mu(f)||_{p,a}=\frac{\Gamma(\frac{2a+1}{2p})}{2^\mu\ \Gamma(\mu+\frac{2a+1}{2p})}.$$ In particular, for \(a=\mu>0\), the fractional transform \(\mathscr{H}_\mu\) is bounded from \(r^{-2\mu}L^p(d\nu_\mu)\) into \(L^p(d\nu_\mu)\) and for every \(f\in r^{-2\mu}L^p(d\nu_\mu)\), $$||\mathscr{H}_\mu(f)||_{p,\nu_\mu}\leqslant\frac{\Gamma(\frac{2\mu+1}{2p})}{2^\mu\ \Gamma(\mu+\frac{2\mu+1}{2p})}||r^{2\mu}f||_{p,\nu_\mu}.$$

Competing Interests

The authors declare that they have no competing interests.

]]>
Necessary and sufficient condition for a surface to be a sphere https://old.pisrt.org/psr-press/journals/oma-vol-2-issue-2-2018/necessary-and-sufficient-condition-for-a-surface-to-be-a-sphere/ Sun, 14 Oct 2018 08:57:02 +0000 https://old.pisrt.org/?p=1285
OMA-Vol. 2 (2018), Issue 2, pp. 51–52 | Open Access Full-Text PDF
Alexander G. Ramm
Abstract:Let \(S\) be a \(C^{1}\)-smooth closed connected surface in \(\mathbb{R}^3\), the boundary of the domain \(D\), \(N=N_s\) be the unit outer normal to \(S\) at the point \(s\), \(P\) be the normal section of \(D\). A normal section is the intersection of \(D\) and the plane containing \(N\). It is proved that if all the normal sections for a fixed \(N\) are discs, then \(S\) is a sphere. The converse statement is trivial.
]]>
Open Access Full-Text PDF

Open Journal of Mathematical Analysis

Necessary and sufficient condition for a surface to be a sphere

Alexander G. Ramm\(^1\)
Department of Mathematics, Kansas State University, Manhattan, KS 66506, USA.; (A.G.R)
\(^{1}\)Corresponding Author; ramm@math.ksu.edu

Copyright © 2018 Alexander G. Ramm. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.

Abstract

Let \(S\) be a \(C^{1}\)-smooth closed connected surface in \(\mathbb{R}^3\), the boundary of the domain \(D\), \(N=N_s\) be the unit outer normal to \(S\) at the point \(s\), \(P\) be the normal section of \(D\). A normal section is the intersection of \(D\) and the plane containing \(N\). It is proved that if all the normal sections for a fixed \(N\) are discs, then \(S\) is a sphere. The converse statement is trivial.

Keywords:

Conditions for a surface to be a sphere.

1. Introduction

Let \(S\) be a \(C^{1}\)-smooth closed connected surface in \(\mathbb{R}^3\), the boundary of the domain \(D\), and let \(N=N_s\) be the unit outer normal to \(S\) at the point \(s\). Throughout, we assume that \(S\) satisfies these assumptions. Let \(P\) be the normal section of \(D\). A normal section is the intersection of \(D\) and the plane containing \(N\). Our result is the following:

Theorem 1.1. If all the normal sections for a fixed \(N\) are discs, then \(S\) is a sphere. Conversely, if \(S\) is a sphere then all its normal sections are discs.

There are several "characterizations" of the sphere in the literature. We will use the following.

Lemma 1.2. Let \(r=r(p,q)\) be a parametric representation of \(S\). If \([r(p,q), N_s]=0\) for all \(s=s(p,q)\) on \(S\), then \(S\) is a sphere. Here \([r,N]\) is the vector product of two vectors.

A proof of this result can be found in [1, 2]. For the convenience of the reader, a short proof of Lemma 1.2 is given in Section 2.

2. Proof

Proof of Theorem 1.1. Let \(s\in S\) be a fixed point and let \(P\) be one of the normal sections of \(D\) corresponding to \(N_s\). By assumption, this section is a disc; let \(O\) be its center and \(R\) its radius. Rotate \(P\) about \(N_s\). Each of the resulting normal sections is a disc of radius \(R\) centered at \(O\). If \(r=r(p,q)\) is a parametric representation of \(S\) with the origin taken at \(O\), then \([r, N]=0\) at every point of \(S\), because each such point belongs to the boundary of a disc of radius \(R\) centered at \(O\). From Lemma 1.2 it follows that \(S\) is a sphere.

Proof of Lemma 1.2. One has \(N=[r_p(p,q), r_q(p,q)]/|[r_p(p,q),r_q(p,q)]|\), where \([a,b]\) is the vector product of \(a\) and \(b\), and \(|a|\) is the length of the vector \(a\). Therefore \([r,N]=0\) implies \([r,[r_p(p,q),r_q(p,q)]]=0\), that is, \(r_p(r,r_q)- r_q(r,r_p)=0\), where \((a,b)\) is the scalar product of two vectors. The vectors \(r_p\) and \(r_q\) are linearly independent since the surface \(S\) is smooth. Thus \((r,r_q)=0\) and \((r,r_p)=0\). Consequently \((r,r)=\mathrm{const}\), that is, \(S\) is a sphere.
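For completeness, the vector identity used above is the standard triple-product ("BAC-CAB") rule, and the last step follows by differentiating \((r,r)\):
$$[r,[r_p,r_q]]=r_p\,(r,r_q)-r_q\,(r,r_p),\qquad
\frac{\partial}{\partial p}(r,r)=2\,(r,r_p)=0,\qquad
\frac{\partial}{\partial q}(r,r)=2\,(r,r_q)=0,$$
so \(|r|^2\) is constant on \(S\); that is, every point of \(S\) lies at the same distance from the origin.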

Competing Interests

The author declares that he has no competing interests.

References

  1. Ramm, A. G. (2005). Inverse problems. Springer, New York.
  2. Ramm, A. G. (2013). The Pompeiu problem. Global Journal of Mathematical Analysis, 1(1), 1-10. http://www.sciencepubco.com/index.php/GJMA/
]]>