OMS – Vol 4 – 2020 – PISRT (https://old.pisrt.org)
On Adomian decomposition method for solving nonlinear ordinary differential equations of variable coefficients
https://old.pisrt.org/psr-press/journals/oms-vol-4-2020/on-adomian-decomposition-method-for-solving-nonlinear-ordinary-differential-equations-of-variable-coefficients/
OMS-Vol. 4 (2020), Issue 1, pp. 476 - 484 Open Access Full-Text PDF
AbdulAzeez Kayode Jimoh, Aolat Olabisi Oyedeji

Open Journal of Mathematical Sciences

On Adomian decomposition method for solving nonlinear ordinary differential equations of variable coefficients

AbdulAzeez Kayode Jimoh\(^1\), Aolat Olabisi Oyedeji
Faculty of Pure and Applied Sciences, Kwara State University, Malete, Nigeria; (A.K.J & A.O.O)
\(^{1}\)Corresponding Author: abdulazeez.jimoh@kwasu.edu.ng

Abstract

This paper considers the extension of the Adomian decomposition method (ADM) from nonlinear ordinary differential equations with constant coefficients to equations with variable coefficients. The total derivatives of the nonlinear functions involved in the problems considered are derived in order to obtain the Adomian polynomials for the problems. Numerical experiments show that the Adomian decomposition method can be extended as an alternative way of finding numerical solutions of ordinary differential equations with variable coefficients. Furthermore, the method is easy to apply, requires no restrictive assumptions, and produces accurate results when compared with other methods in the literature.

Keywords:

Adomian decomposition method, nonlinear ordinary differential equation, Adomian polynomial.

1. Introduction

Differential equations can represent nearly all systems or phenomena undergoing change. They are ubiquitous in science, engineering and biology as well as in economics, social science, health and business [1]. Depending on the nature of the system at hand, differential equations may be linear, pseudo-linear or nonlinear. Often the systems described by differential equations are so complex, or so large, that a purely analytical treatment may not be tractable.

Many mathematicians have studied differential equations. A simple example is Newton’s second law of motion: the relationship between the displacement \(x\) and the time \(t\) of an object under the force \(F\) is given by the differential equation \[m\frac{d^{2}x(t)}{dt^{2}}=F(x(t)),\] which constrains the motion of a particle of constant mass \(m\). In general, \(F\) is a function of the position \(x(t)\) of the particle at time \(t\).

The Adomian decomposition method is a semi-analytical method for solving ordinary and partial nonlinear differential equations [2]. The method, developed by George Adomian, is also applied to solve both linear and nonlinear boundary value problems (BVPs) and integral equations. The numerical result is obtained with a minimum amount of computation [3]. The Adomian technique is based on decomposing the solution of a nonlinear functional equation into a series of functions, each term of which is obtained from a polynomial generated by a power series expansion of an analytic function [4]. Among the advantages of the Adomian decomposition method are that it can be applied directly to all types of functional equations, both linear and nonlinear, and that it greatly reduces the size of the computational work while maintaining high accuracy of the numerical solution [4]. The Adomian decomposition method (ADM) provides an analytical approximate solution for nonlinear functional equations in terms of a rapidly converging series without linearization, perturbation or discretization [5].

1.1. Methods of generating Adomian polynomials

Consider a functional equation
\begin{equation} \label{1} u=f+L(u)+N(u), \end{equation}
(1)
where \(L\) and \(N\) are linear and nonlinear operators respectively and \(f\) is a known function. By Adomian decomposition method, the solution \(u(x,t)\) of (1) is decomposed in the form of an infinite series
\begin{equation} \label{2} u(x,t)=\sum^{\infty}_{n=0}u_{n}(x,t). \end{equation}
(2)
Furthermore, the nonlinear function \(N(u)\) assumes the following representation:
\begin{equation} \label{3} N(u)=\sum^{\infty}_{n=0}A_{n}(u_{0},u_{1},...,u_{n}), \end{equation}
(3)
where the \(A_{n}\)'s are the \(n\)th-order Adomian polynomials. In the linear case \(N(u)=u\), \(A_{n}\) simply reduces to \(u_{n}\).

Cherruault and Adomian [6] gave a method for determining these polynomials by parameterizing \(u(x,t)\) as

\begin{equation} \label{4} u_{\alpha}(x,t)=\sum^{\infty}_{n=0}u_{n}(x,t)\alpha^{n}, \end{equation}
(4)
and assuming \(N(u_{\alpha})\) to be analytic in \(\alpha\), which decomposes as
\begin{equation} \label{5} N(u_{\alpha})=\sum^{\infty}_{n=0}A_{n}(u_{0},u_{1},...,u_{n})\alpha^{n}. \end{equation}
(5)
Hence, the Adomian polynomials \(A_{n}\) are given by
\begin{equation} \label{6} A_{n}(u_{0},u_{1},...,u_{n})=\frac{1}{n!}\frac{\partial^{n}N(u_{\alpha})}{\partial \alpha^{n}}\Big|_{\alpha=0}\quad \forall n \in N_{0}, \end{equation}
(6)
where \(N_{m}=\{n\in N\cup \{0\} : n\geq m\}\) and \(N\) denotes the set of positive integers.
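Formula (6) lends itself to direct symbolic computation. The following Python/SymPy sketch is an illustration added here (it is not part of the original paper), and the nonlinearity \(N(u)=u^{2}\) is only an assumed example; it generates the first few Adomian polynomials by differentiating \(N(u_{\alpha})\) with respect to the parameter \(\alpha\):

```python
import sympy as sp

def adomian_polynomials(N, n_terms):
    """Generate A_0, ..., A_{n_terms-1} for a nonlinearity N(u) via formula (6)."""
    alpha = sp.symbols('alpha')
    u = sp.symbols('u0:%d' % n_terms)            # components u_0, u_1, ..., u_{n_terms-1}
    u_alpha = sum(u[k] * alpha**k for k in range(n_terms))
    A = []
    for n in range(n_terms):
        # A_n = (1/n!) * d^n/d alpha^n of N(u_alpha), evaluated at alpha = 0
        A_n = sp.diff(N(u_alpha), alpha, n).subs(alpha, 0) / sp.factorial(n)
        A.append(sp.expand(A_n))
    return A

# Assumed illustrative nonlinearity N(u) = u^2
for n, A_n in enumerate(adomian_polynomials(lambda u: u**2, 4)):
    print(f"A_{n} =", A_n)
# A_0 = u0**2, A_1 = 2*u0*u1, A_2 = 2*u0*u2 + u1**2, A_3 = 2*u0*u3 + 2*u1*u2
```

Truncating \(u_{\alpha}\) after a finite number of terms does not affect \(A_{0},\ldots,A_{n}\), since the higher-order components enter only through higher powers of \(\alpha\).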

Rach and Baghdasarian [7] suggested the following formulae for determining Adomian polynomials:

\begin{align} &\label{7} A_{0}(u_{0})=N(u_{0}),\\ \end{align}
(7)
\begin{align} &\label{8} A_{n}(u_{0},u_{1},...,u_{n})=\sum^{n}_{k=1}c(k,n)N^{k}(u_{n}) \forall n\in N. \end{align}
(8)
Wazwaz [8] suggested a new algorithm in which after separating \(A_{0}=N(u_{0})\) from other terms of the Taylor series expansion of the nonlinear function \(N(u)\), we collect all terms of the expansion obtained such that the sum of the subscripts of the components of \(u(x,t)\) in each term is the same.

Ibijola and Adegboyegun [9] considered the generalized first order nonlinear differential equation of the form

\begin{equation} \label{9} y'=f(t,y),\quad y\in R^{d},\quad f : R \times R^{d}\longrightarrow R^{d}, \end{equation}
(9)
with initial condition \(y(0)=y_{0}\in R^{d}\). In reviewing the basic methodology, the abstract system of differential equations (9) was considered, assuming that \(f(t,y)\) is nonlinear and analytic near \(y=y_{0}\), \(t=0\). Solving the initial value problem (9) then relies on decomposing the nonlinear function as
\begin{equation} \label{10} f(t,y)=\sum^{\infty}_{n=0}A_{n}(t, y_{0}, y_{1},...,y_{n}). \end{equation}
(10)
The dependence of \(A_{n}\) on \(t\) and \(y_{0}\) may be non-polynomial. Formally, \(A_{n}\) is obtained by
\begin{equation} \label{11} A_{n}=\frac{1}{n!}\frac{d^{n}}{d\rho^{n}}f\Big(t, \sum^{\infty}_{k=0}\rho^{k}y_{k}\Big)\Big|_{\rho=0},~~~n=0,1,2,..., \end{equation}
(11)
where \(\rho\) is a formal parameter. Functions \(A_{n}\) are polynomials in \(y_{1},y_{2},...,y_{n}\), which are referred to as the Adomian polynomials. The first few Adomian polynomials for \(d=1\) are listed by Zhu et al., [10] as:
\begin{equation} \label{12}\begin{cases}A_{0}=f(t,y_{0}), & A_{1}=y_{1}f'(t,y_{0}),\\A_{2}=y_{2}f'(t,y_{0})+\frac{1}{2}{y_{1}}^{2}f''(t,y_{0}), & A_{3}=y_{3}f'(t,y_{0})+y_{1}y_{2}f''(t,y_{0})+\frac{1}{6}{y_{1}}^{3}f'''(t,y_{0}), \end{cases} \end{equation}
(12)
where primes denote the partial derivatives with respect to \(y\). It was shown by Himoun et al., [11] that the Adomian polynomials \(A_{n}\) are defined by the explicit formula:
\begin{equation} \label{13} A_{n}=\sum^{n}_{k=1}\frac{1}{k!}f^{(k)}(t,y_{0})\Big(\sum_{p_{1}+...+p_{k}=n}y_{p_{1}}\cdots y_{p_{k}}\Big),~ n\geq 1. \end{equation}
(13)
Abbaoui and Cherruant [12] proved a bound for Adomian polynomials and obtained
\begin{equation} \label{14} |A_{n}|\leq \frac{(n+1)^{n}}{(n+1)!}M^{n+1}, \end{equation}
(14)
where \(\sup_{t\in J}|f^{(k)}(t,y_{0})|\leq M\) for a given time interval \(J\subset R\).

Basically, there are two methods of generating Adomian polynomials using the orthogonality of the functions \((e^{inx},~n\in Z)\). The first method determines these polynomials explicitly whereas the second method generates them recursively. Different forms of nonlinearity are discussed in the literature [13]. Cherruault and Adomian [14] suggested the following:

  • 1. The series solution
    \begin{equation} \label{15} u=\sum^{\infty}_{k=0}u_{k} \end{equation}
    (15)
    is absolutely convergent.
  • 2. The nonlinear function \(N(u)\) admits the representation
    \begin{equation} \label{16} N(u)=\sum^{\infty}_{k=0}N^{(k)}(0)\frac{u^{k}}{k!},~~|u|< \infty. \end{equation}
    (16)
The assumption (16) is almost always satisfied in concrete physical problems. By (15) and (16), we have the form of Adomian series as a generalization of Taylor series:
\begin{equation} \label{17} N(u)=\sum^{\infty}_{k=0}A_{k}(u_{0},u_{1},...,u_{k})=\sum^{\infty}_{k=0}N^{(k)}(u_{0})\frac{(u-u_{0})^{k}}{k!}, \end{equation}
(17)
\begin{equation} \label{18} u_{\rho}(x,t)=\sum^{\infty}_{k=0}u_{k}(x,t)f^{(k)}(\rho), \end{equation}
(18)
and
\begin{equation} \label{19} \bar{u_{\rho}}(x,t)=\sum^{\infty}_{k=0}\bar {u_{k}}(x,t)f^{(k)}(\rho), \end{equation}
(19)
where \(\rho\) is a real parameter and \(f\) is any real or complex valued function with \(|f|< 1\). So, series (19) is also absolutely convergent.

Now, take

\begin{equation} \label{20} N(u_{\rho})=\sum^{\infty}_{k=0}\frac{N^{(k)}(u_{0})}{k!}\Big(\sum^{\infty}_{j=1}u_{j}(x,t)f^{j}(\rho)\Big)^{k}. \end{equation}
(20)
Since \(\sum^{\infty}_{j=1}u_{j}(x,t)f^{j}(\rho)\) is absolutely convergent, by re-arranging the terms on the right hand side of (20) we can write \(N(u_{\rho})\) as:
\begin{equation} \label{21} N(u_{\rho})=\sum^{\infty}_{k=0}A_{k}f^{k}(\rho),\end{equation}
(21)
where \(A_{k}'s\) are Adomian polynomials. Hence
\begin{align*}N(u_{\rho})&=N(u_{0})+N^{(1)}(u_{0})[u_{1}f(\rho)+u_{2}f^{2}(\rho)+...]+\frac{N^{(2)}(u_{0})}{2!}[u_{1}f(\rho)+u_{2}f^{2}(\rho)+...]^{2}\\ &\;\;\;+\frac{N^{(3)}(u_{0})}{3!}[u_{1}f(\rho)+u_{2}f^{2}(\rho)+...]^{3}+...\\ &=N(u_{0})+N^{(1)}(u_{0})u_{1}f(\rho)+[N^{(1)}(u_{0})u_{2}+N^{(2)}(u_{0})\frac{{u_{1}}^{2}}{2!}]f^{2}(\rho)\end{align*} \begin{align}\label{22} &\;\;\;+[N^{(1)}(u_{0})u_{3}+N^{(2)}(u_{0})u_{1}u_{2}+N^{(3)}(u_{0})\frac{{u_{1}}^{3}}{3!}]f^{3}(\rho)+...\notag\\ &=\sum^{\infty}_{k=0}A_{k}(u_{0},u_{1},...u_{k})f^{k}(\rho).\end{align}
(22)
Note that \(A_{k}'s\) are polynomials in \(u_{0}\), \(u_{1}\),...,\(u_{k}\) only.

2. Adomian polynomial solutions of ordinary differential equations

The generalized first order nonlinear equation considered is given by
\begin{equation} \label{23} y'=f(x,y), \end{equation}
(23)
with initial value
\begin{equation} \label{24} y(x_{a})=y_{a}. \end{equation}
(24)
Many authors have used various methods to solve (23) and (24) with constant coefficients. A few of these solution techniques are decomposition methods [15], the differential transform method [16], the double decomposition method [17], the Taylor series method with numerical derivatives [18], the homotopy perturbation method [19], the projected differential transform method [20], the generalized differential transform method [21], the Picard iteration method [9] and the Adomian decomposition method [7,22].

The main goal of this article is to extend the Adomian decomposition method, by a suitable modification, in order to obtain a polynomial solution of (23) and (24). The Adomian decomposition method (ADM) solves nonlinear operator equations for any analytic nonlinearity, providing an easily computable, readily verifiable and rapidly convergent sequence of analytic approximate solutions. Since it was first presented in the 1980s, the Adomian decomposition method has undergone several modifications by various researchers in an attempt to improve its accuracy or expand the applicability of the original method [23]. The choice of decomposition is non-unique and provides a valuable advantage to the analyst, permitting the freedom to design modified recursion schemes for ease of computation in realistic systems [23].

In order to obtain the Adomian polynomial solution of (23) and (24), we write the nonlinear variable coefficient equation (23) in its operator form as:

\begin{equation} \label{25} Ly+Ry+Ny =F, \end{equation}
(25)
where \(F\) is a known function, \(y\) is the unknown function to be determined, \(L\) is the linear operator to be inverted, \(R\) is the linear remainder operator and \(N\) is the nonlinear operator, which is assumed to be analytic. We stress that the choice of \(L\) and its pair \(L^{-1}\) (the inverse of \(L\)) is determined by the equation being considered, hence the choice is non-unique. Here, we choose \(L=\frac{d}{dx}(.)\), and thus its inverse \(L^{-1}\) is the one-fold definite integration operator from \(x_{0}\) to \(x\). Thus, we have \(L^{-1}Ly=y-\psi,\) where \(\psi\) carries the initial value, \(\psi=y_{a}\).

For an \(n\)th-order differential equation, the choice of \(L\) is \(L=\frac{d^{n}}{dx^{n}}(.)\) and its inverse \(L^{-1}\) is the \(n\)-fold definite integration operator from \(x_{0}\) to \(x\). Thus, \(\psi\) absorbs the initial values, \(\psi=\sum^{n-1}_{k=0}\alpha_{k}\frac{(x-x_{0})^{k}}{k!}.\) Applying the inverse linear operator \(L^{-1}\) to both sides of Equation (25), we obtain

\begin{equation} \label{26} y=\beta(x)-L^{-1}[Ry+Ny], \end{equation}
(26)
where \(\beta(x)=\psi+L^{-1}F\).

The unknown function \(y\) is expressed in a series of the form:

\begin{equation} \label{27} y=\sum^{\infty}_{k=0}y_{k}, \end{equation}
(27)
and the nonlinear term \(Ny\) is decomposed into a series:
\begin{equation} \label{28} Ny=\sum^{\infty}_{k=0}A_{k}, \end{equation}
(28)
where the \(A_{k}\)'s, which depend on \(y_{0}\), \(y_{1}\),...,\(y_{k}\), are called the Adomian polynomials and are obtained for the nonlinearity \(Ny=f(y)\) by
\begin{equation} \label{29} A_{k}=\frac{1}{k!}\frac{\partial^{k}}{\partial \lambda^{k}}\Big[f\Big(\sum^{\infty}_{n=0}y_{n}\lambda^{n}\Big)\Big]_{\lambda=0},~~k=0,1,2,..., \end{equation}
(29)
where \(\lambda\) is a formal parameter.

The first few Adomian polynomials for the one variable simple analytic nonlinearity \(Ny=f(y(x))\) have been listed by Zhu et al., [10] from \(A_{0}\) through \(A_{3}\), inclusively.

However, in this work, for equations with variable coefficients, we modified the above expressions for \(A_{0}\) through \(A_{4}\), inclusively, as

\begin{equation} \label{30} \begin{cases}A_{0}=f(t_{0},y_{0}),\\ A_{1}=y_{1}f'(t_{0},y_{0}),\\ A_{2}=y_{2}f'(t_{0},y_{0})+\frac{1}{2}{y_{1}}^{2}f''(t_{0},y_{0}),\\ A_{3}=y_{3}f'(t_{0},y_{0})+y_{1}y_{2}f''(t_{0},y_{0})+\frac{1}{6}{y_{1}}^{3}f'''(t_{0},y_{0}),\\ A_{4}=y_{4}f'(t_{0},y_{0})+\frac{1}{2}{y_{2}}^{2}f''(t_{0},y_{0})+\frac{1}{6} y_{1}y_{3}f'''(t_{0},y_{0})+\frac{1}{24}{y_{1}}^{4}f^{(4)}(t_{0},y_{0}), \end{cases} \end{equation}
(30)
where primes denote total derivatives of \(f(t, y)\) at \((t_{0},y_{0})\).

Using the \(A_{k}'s\) in (26)-(28), we have the recursive formula

\begin{equation} \label{31} y_{n+1}=\int^{x}_{0}A_{n}[t,y_{0}(t),y_{1}(t),...,y_{n}(t)]dt,~~n=0,1,2,.... \end{equation}
(31)
The \(y_{k}\)'s \((k=0,~1,~2,...,n)\) are then substituted into (27) to obtain the approximate solution.
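For concreteness, the recursion (31) can be carried out symbolically. The sketch below is an illustration added here (not taken from the paper); it implements the standard, unmodified scheme with the polynomials of (29) for a first-order problem \(y'=f(x,y)\), \(y(x_{0})=y_{a}\). Since it does not use the authors' modified polynomials (30) with total derivatives at \((t_{0},y_{0})\), its coefficients need not coincide with the tabulated ADM values in the examples below.

```python
import sympy as sp

def adm_first_order(f, x0, ya, n_terms):
    """Standard ADM sketch for y' = f(x, y), y(x0) = ya, using recursion (31)."""
    x, t, lam = sp.symbols('x t lambda_')
    ys = [sp.sympify(ya)]                          # y_0 absorbs the initial value
    for n in range(n_terms - 1):
        # Adomian polynomial A_n of f, formula (29), built from the components found so far
        y_lam = sum(ys[k] * lam**k for k in range(len(ys)))
        A_n = sp.diff(f(x, y_lam), lam, n).subs(lam, 0) / sp.factorial(n)
        # recursion (31): y_{n+1} is the integral of A_n from x0 to x
        ys.append(sp.integrate(A_n.subs(x, t), (t, x0, x)))
    return sp.expand(sum(ys))

# Assumed illustration: Example 1 below, y' = x*y**2 + 1 with y(0) = 1
print(adm_first_order(lambda x, y: x*y**2 + 1, 0, 1, 5))
```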

2.1. Evaluation of the error

In this paper, the error is defined as \[\text{Error}=\max_{a\leq x\leq b}\left|\text{Exact Value}-\text{Approximate Value}\right|.\] In case the exact solution is not available, the approximate solution is compared with those in the literature.

2.2. Illustrative examples

The Adomian decomposition method (ADM), as extended and modified, is demonstrated on some examples of first-order nonlinear ordinary differential equations with variable coefficients. The results obtained are tabulated for comparison.

Example 1. Consider the first order nonlinear differential equation [24]:

\begin{equation} \label{32} y'=xy^{2}+1,\end{equation}
(32)
with the initial condition \(y(0)=1\).
Table 1. Numerical results for Example 1.
x Solution by TSM Solution by ADM
0.0 - 1.00000
0.1 - 1.10593
0.2 1.22600 1.22832
0.3 - 1.37617
0.4 1.54210 1.56171
0.5 - 1.80078
0.6 - 2.11336
0.7 - 2.52396
0.8 - 3.06208
0.9 - 3.76270
1.0 - 4.66667

Table 1 shows that the solutions by the Taylor series method (TSM) and the Adomian decomposition method (ADM) are very close to each other at the points \(x=0.2\) and \(x=0.4\). Hyphens indicate that the function values are not available in the literature.

Figure 1. The behaviour of the Taylor series method compared with the solutions using Adomian decomposition method (Example 1).

Figure 1 shows the behaviour of the Taylor series method compared with the solutions using the Adomian decomposition method. A gap between the two curves begins to appear from the point \(x=0.6\).

Example 2. Consider the first order nonlinear differential equation [24]:

\begin{equation} \label{33} y'=xy^{2},\end{equation}
(33)
with the initial condition \(y(0.1)=1.\) The exact solution is \(y(x)=\frac{2}{2.01-x^{2}}\) and the solution by the Adomian Decomposition method is \(y(x)=0.98973+0.169367x-1.64583x^{2}+13.4375x^{3}-41.6667x^{4}+52.0833x^{5}\).
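The stated exact solution is easy to confirm symbolically; the snippet below is an illustrative check added here (not part of the paper) that the residual of the ODE vanishes and that the initial condition holds.

```python
import sympy as sp

x = sp.symbols('x')
y_exact = 2 / (sp.Rational(201, 100) - x**2)      # y(x) = 2/(2.01 - x^2), with 2.01 kept exact

print(sp.simplify(sp.diff(y_exact, x) - x*y_exact**2))   # residual of y' = x*y^2  ->  0
print(y_exact.subs(x, sp.Rational(1, 10)))               # initial value y(0.1)    ->  1
```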
Table 2. Numerical results for Example 2.
x Exact solution Solution by ADM Error
0.10 1.00000 1.00000 0.0000E-5
0.11 1.00105 1.00106 1.0000E-5
0.12 1.00220 1.00223 3.0000E-5
0.13 1.00346 1.00348 2.0000E-5
0.14 1.00482 1.00485 3.0000E-5
0.15 1.00629 1.00631 2.0000E-5
0.16 1.00786 1.00789 3.0000E-5
0.17 1.00954 1.00956 2.0000E-5
0.18 1.01133 1.01136 3.0000E-5
0.19 1.01322 1.01324 2.0000E-5
0.20 1.01523 1.01527 4.0000E-5

Figure 2. The behaviour of the exact solution and the approximate solution using Adomian decomposition method (Example 2).

Table 2 shows that the results by the Adomian decomposition method (ADM) are very close to the exact values in terms of the absolute errors produced. Figure 2 shows the behaviour of the exact solution and the approximate solution using the Adomian decomposition method. The curve produced by the Adomian decomposition method stays so close to the curve of the exact solution that the gap between the two curves is not noticeable to the naked eye.

Example 3. Consider the first order nonlinear ordinary differential equation [25]:

\begin{equation} \label{34} y'=x^{2}+y^{2},\end{equation}
(34)
with the initial condition \(y(0)=1.\) The solution by the Adomian decomposition method is \(y(x)=1+x+x^{2}+\frac{4}{3}x^{3}+\frac{5}{3}x^{4}+\frac{16}{15}x^{5}.\)
Table 3. Numerical results for Example 3.
x Solution by TSMO(4) Solution by ADM
0.0 - 1.00000
0.1 - 1.11151
0.2 1.25253 1.25367
0.3 - 1.44209
0.4 1.69318 1.69892
0.5 - 2.05417
0.6 - 2.54694
0.7 - 3.22677
0.8 - 4.15486
0.9 - 5.40536
1.0 - 7.06667

Table 3 shows that the Taylor series method of order four (TSMO(4)) produces results that are very close to the results produced by the Adomian decomposition method (ADM) at the points \(x=0.2\) and \(x=0.4\). Hyphens indicate that the function values are not available in the literature.

Figure 3. The behaviour of the Taylor series method of order four compared with the solutions using Adomian decomposition method (Example 3).

Figure 3 shows the behaviour of the Taylor series method of order four compared with the solutions using Adomian decomposition method. Only the two points available for the Taylor series method of order four are plotted (blue line). The two curves almost coincide with each other in the interval \((0.2,~0.4)\).

Example 4. Consider the first order differential equation [24]:

\begin{equation} \label{35} y'=\frac{y}{x}-\frac{5}{2} x^{2}y^{3},\end{equation}
(35)
with the initial condition \(y(1)=\frac{1}{\sqrt{2}}.\) The solution by the Adomian Decomposition method is \(y(x)=-1.34642+7.33484x-11.4929x^{2}+10.6446x^{3}-5.4401x^{4}+1.30000x^{5}\).
Table 4. Numerical results for Example 4.
x Solution by MEM Solution by ADM
1.0 1.00000 1.00000
1.1 - 1.11227
1.2 1.25478 1.25371
1.3 - 1.44140
1.4 1.69912 1.69808
1.5 - 2.05371
1.6 2.57034 2.54703
1.7 - 3.22713
1.8 4.27532 4.15499
1.9 - 5.40508
2.0 7.20134 7.06686

Table 4 shows that the modified Euler's method (MEM) and the Adomian decomposition method (ADM) produce results that are very close to one another at the points \(x=1.0,~1.2,~1.4,~1.6,~1.8\) and \(2.0\). Hyphens indicate that the function values are not available in the literature.

Figure 4. The behaviour of the modified Euler’s method compared with the solutions using Adomian decomposition method (Example 4).

Figure 4 shows the behaviour of the modified Euler's method compared with the solutions using the Adomian decomposition method. The two curves produced by the Adomian decomposition method and the modified Euler's method are very close to each other between the points \(x=0\) and \(x=0.4\). However, a noticeable gap starts to appear between them from the point \(x=0.4\) onwards.

3. Discussion of results

The Adomian decomposition method (ADM) has been extended and discussed for the numerical solution of nonlinear ordinary differential equations with variable coefficients. The results obtained were compared with the exact solutions (where available) and with some existing results in the literature. The absolute errors obtained for Example 2, presented in Table 2, show that the results by the extended Adomian decomposition method (ADM) are in excellent agreement with the exact solutions. Similarly, the results obtained compare well with those in the literature, as shown in Tables 1, 3 and 4.

Author Contributions

All authors contributed equally to the writing of this paper. All authors read and approved the final manuscript.

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Abbasbandy, S., & Shivanian, E. (2009). Application of variational iteration method for nth-order integro-differential equations. Zeitschrift für Naturforschung A, 64(7-8), 439-444.
  2. Wazwaz, A. M. (2000). A note on using Adomian decomposition method for solving boundary value problems. Foundations of Physics Letters, 13(5), 493-498.
  3. Hosseini, M. M., & Nasabzadeh, H. (2007). Modified Adomian decomposition method for specific second order ordinary differential equations. Applied Mathematics and Computation, 186(1), 117-123.
  4. Hosseini, M. M., & Nasabzadeh, H. (2006). On the convergence of Adomian decomposition method. Applied Mathematics and Computation, 182(1), 536-543.
  5. Hasan, Y. Q., & Zhu, L. M. (2009). Solving singular boundary value problems of higher-order ordinary differential equations by modified Adomian decomposition method. Communications in Nonlinear Science and Numerical Simulation, 14(6), 2592-2596.
  6. Cherruault, Y., & Adomian, G. (1993). Decomposition methods: a new proof of convergence. Mathematical and Computer Modelling, 18(12), 103-106.
  7. Rach, R., & Baghdasarian, A. (1990). On approximate solution of a nonlinear differential equation. Applied Mathematics Letters, 3(3), 101-102.
  8. Wazwaz, A. M. (2000). A note on using Adomian decomposition method for solving boundary value problems. Foundations of Physics Letters, 13(5), 493-498.
  9. Ibijola, E. A., Adegboyegun, B. J., & Halid, O. Y. (2008). On Adomian Decomposition Method (ADM) for numerical solution of ordinary differential equations. Advances in Natural and Applied Sciences, 2(3), 165-170.
  10. Zhu, Y., Chang, Q., & Wu, S. (2005). A new algorithm for calculating Adomian polynomials. Applied Mathematics and Computation, 169(1), 402-416.
  11. Himoun, N., Abbaoui, K., & Cherruault, Y. (2003). Short new results on Adomian method. Kybernetes, 32, 523-539.
  12. Abbaoui, K., & Cherruault, Y. (1995). New ideas for proving convergence of decomposition methods. Computers & Mathematics with Applications, 29(7), 103-108.
  13. Abbaoui, K., & Cherruault, Y. (1994). Convergence of Adomian's method applied to nonlinear equations. Mathematical and Computer Modelling, 20(9), 69-73.
  14. Cherruault, Y., & Adomian, G. (1993). Decomposition methods: a new proof of convergence. Mathematical and Computer Modelling, 18(12), 103-106.
  15. Bigi, D., & Riganti, R. (1986). Solutions of nonlinear boundary value problems by the decomposition method. Applied Mathematical Modelling, 10(1), 49-52.
  16. Arikoglu, A., & Ozkol, I. (2007). Solution of fractional differential equations by using differential transform method. Computational Mathematics Applications, 41, 1237-1244.
  17. Yang, Y. T., & Chien, S. K. (2008). A double decomposition method for solving the periodic base temperature in convective longitudinal fins. Energy Conversion and Management, 49(10), 2910-2916.
  18. Miletics, E., & Molnárka, G. (2004). Taylor series method with numerical derivatives for initial value problems. Journal of Computational Methods in Sciences and Engineering, 4(1-2), 105-114.
  19. He, J. H. (2003). Homotopy perturbation method: a new nonlinear analytical technique. Applied Mathematics and Computation, 135(1), 73-79.
  20. Jang, B. (2010). Solving linear and nonlinear initial value problems by the projected differential transform method. Computer Physics Communications, 181(5), 848-854.
  21. Zou, L., Wang, Z., & Zong, Z. (2009). Generalized differential transform method to differential-difference equation. Physics Letters A, 373(45), 4142-4151.
  22. Adomian, G. (1993). Solving Frontier Problems of Physics: The Decomposition Method. Springer, New York.
  23. Duan, J. S., & Rach, R. (2011). A new modification of the Adomian decomposition method for solving boundary value problems for higher order nonlinear differential equations. Applied Mathematics and Computation, 218(8), 4090-4118.
  24. Griffiths, D. F., & Higham, D. J. (2010). Numerical Methods for Ordinary Differential Equations: Initial Value Problems. Springer Science & Business Media.
  25. Jain, M. K., Iyengar, S. R. K., & Jain, R. K. (2012). Numerical Methods for Scientific and Engineering Computation (Sixth Edition). New Age International Publishers.
On the entire Zagreb indices of the line graph and line cut-vertex graph of the subdivision graph
https://old.pisrt.org/psr-press/journals/oms-vol-4-2020/on-the-entire-zagreb-indices-of-the-line-graph-and-line-cut-vertex-graph-of-the-subdivision-graph/
OMS-Vol. 4 (2020), Issue 1, pp. 470 - 475 Open Access Full-Text PDF
H. M. Nagesh, Girish V. R

Open Journal of Mathematical Sciences

On the entire Zagreb indices of the line graph and line cut-vertex graph of the subdivision graph

H. M. Nagesh\(^1\), Girish V. R
Department of Science and Humanities, PES University-Electronic City Campus, Bangalore – 560 100, India; (H.M.N & G.V.R)
\(^{1}\)Corresponding Author: nageshhm@pes.edu

Abstract

Let \(G=(V,E)\) be a graph. Then the first and second entire Zagreb indices of \(G\) are defined, respectively, as \(M_{1}^{\varepsilon}(G)=\displaystyle \sum_{x \in V(G) \cup E(G)} (d_{G}(x))^{2}\) and \(M_{2}^{\varepsilon}(G)=\displaystyle \sum_{\{x,y\}\in B(G)} d_{G}(x)d_{G}(y)\), where \(B(G)\) denotes the set of all 2-element subsets \(\{x,y\}\) such that \(\{x,y\} \subseteq V(G) \cup E(G)\) and members of \(\{x,y\}\) are adjacent or incident to each other. In this paper, we obtain the entire Zagreb indices of the line graph and line cut-vertex graph of the subdivision graph of the friendship graph.

Keywords:

First Zagreb index, second Zagreb index, entire Zagreb index, subdivision graph.

1. Introduction

Throughout this paper, only the finite, undirected, and simple graphs will be considered. Let \(G\) be such a graph with vertex set \(V(G)=\{v_1,v_2,\ldots,v_n\}\) and edge set \(E(G)\), where \(|V(G)|=n\) and \(|E(G)|=m\). These two basic parameters \(n\) and \(m\) are called the \(order\) and \(size\) of \(G\), respectively. The edge connecting the vertices \(u\) and \(v\) will be denoted by \(uv\). The \(degree\) of a vertex \(v\), written \(d_{G}(v)\), is the number of edges of \(G\) incident with \(v\), each loop counting as two edges.

Among the oldest and most studied topological indices are two classical vertex-degree-based topological indices: the first Zagreb index and the second Zagreb index. These two indices first appeared in [1], and were elaborated in [2]. The main properties of \(M_1(G)\) and \(M_2(G)\) were summarized in [3,4]. The first Zagreb index \(M_1(G)\) and the second Zagreb index \(M_2(G)\) of a graph \(G\) are defined, respectively, as

\begin{equation} \label{e1} M_{1}=M_{1}(G)=\displaystyle \sum_{v \in V(G)} d_{G} (v)^2, \end{equation}
(1)
\begin{equation} \label{e2} M_{2}=M_{2}(G)=\displaystyle \sum_{uv \in E(G)} d_{G}(u)d_{G}(v). \end{equation}
(2)
In fact, one can rewrite the first Zagreb index as
\begin{equation} \label{e3} M_{1}=M_{1}(G)=\displaystyle \sum_{uv \in E(G)} [d_{G}(u)+d_{G}(v)]. \end{equation}
(3)
During the past decades, numerous results concerning Zagreb indices have been put forward [5,6,7,8,9]; for historical details, see [3].

In 2008, bearing in mind expression (3), Došlic put forward the first Zagreb coindex, defined as [10]

\begin{equation} \label{e4} \overline{M_1}=\overline{M_{1}}(G)=\displaystyle \sum_{uv \notin E(G)} [d_{G}(u)+d_{G}(v)]. \end{equation}
(4)
In view of expression (4), the second Zagreb coindex is defined analogously as [10]
\begin{equation} \label{e5} \overline{M_2}=\overline{M_{2}}(G)=\displaystyle \sum_{uv \notin E(G)} d_{G}(u)d_{G}(v). \end{equation}
(5)
In expressions (4) and (5), it is assumed that \(u \neq v\).

Furtula and Gutman [11] introduced the forgotten index of \(G\), written \(F(G)\), as the sum of cubes of vertex degrees as follows;

\begin{equation*} F(G)=\displaystyle \sum_{v \in V(G)} d_{G} (v)^3=\displaystyle \sum_{e=uv \in E(G)} \left[d_{G}(u)^{2}+d_{G}(v)^{2}\right]. \end{equation*} Milicevic et al., [12] introduced the first and second reformulated Zagreb indices of a graph \(G\) as edge counterpart of the first and second Zagreb indices, respectively, as follows; \begin{equation*} EM_{1}(G)=\displaystyle \sum_{e \sim f} \left[d_{G}(e)+d_{G}(f)\right]=\displaystyle \sum_{e \in E(G)} d_{G}(e)^{2}, \end{equation*} \begin{equation*} EM_{2}(G)=\displaystyle \sum_{e \sim f} d_{G}(e)d_{G}(f), \end{equation*} where \(d_{G}(e)=d_{G}(u)+d_{G}(v)-2\) for the edge \(e=uv\) and \(e \sim f\) means that the edges \(e\) and \(f\) are incident.

Alwardi et al., [13] introduced the first and second entire Zagreb indices of a graph \(G\) as follows;

\begin{equation*} M_{1}^{\varepsilon}(G)=\displaystyle \sum_{x \in V(G) \cup E(G)} (d_{G}(x))^{2}, \end{equation*} \begin{equation*} M_{2}^{\varepsilon}(G)=\displaystyle \sum_{\{x,y\}\in B(G)} d_{G}(x)d_{G}(y), \end{equation*} where \(B(G)\) denotes the set of all 2-element subsets \(\{x,y\}\) such that \(\{x,y\} \subseteq V(G) \cup E(G)\) and members of \(\{x,y\}\) are adjacent or incident to each other.
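These definitions can be evaluated mechanically on small graphs. The following Python/NetworkX sketch is an illustration added here (not part of the paper); it assumes the usual convention \(d_{G}(e)=d_{G}(u)+d_{G}(v)-2\) for the degree of an edge \(e=uv\), which is consistent with the relations of Ghalavand and Ashrafi used later in the paper.

```python
import itertools
import networkx as nx

def entire_zagreb_indices(G):
    """First and second entire Zagreb indices of a simple graph G, straight from the
    definition, assuming the edge degree d(e) = d(u) + d(v) - 2 for e = uv."""
    deg = dict(G.degree())
    d = {v: deg[v] for v in G.nodes()}                                       # vertex degrees
    d.update({frozenset(e): deg[e[0]] + deg[e[1]] - 2 for e in G.edges()})   # edge degrees

    M1e = sum(val**2 for val in d.values())

    M2e = sum(d[u] * d[v] for u, v in G.edges())                 # adjacent vertex pairs
    for e, f in itertools.combinations(G.edges(), 2):            # adjacent edge pairs
        if set(e) & set(f):
            M2e += d[frozenset(e)] * d[frozenset(f)]
    for v in G.nodes():                                          # incident vertex-edge pairs
        for e in G.edges(v):
            M2e += d[v] * d[frozenset(e)]
    return M1e, M2e

# Small illustration: for the triangle C_3 this returns (24, 48)
print(entire_zagreb_indices(nx.cycle_graph(3)))
```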

The subdivision graph of a graph \(G\), written \(S(G)\), is the graph obtained from \(G\) by replacing each of its edges by a path of length 2, or equivalently by inserting an additional vertex into each edge of \(G\). The friendship graph, written \(F_{n}\), \(n\geq 2\), is a planar undirected graph with \(2n+1\) vertices and \(3n\) edges. The friendship graph can be constructed by joining \(n\) copies of the cycle graph \(C_3\) with a vertex in common.

There are many graph operators (or graph valued functions) with which one can construct a new graph from a given graph, such as the line graphs, line cut-vertex graphs; total graphs; and their generalizations. The line graph of a graph \(G\), written \(L(G)\), is the graph whose vertices are the edges of \(G\), with two vertices of \(L(G)\) adjacent whenever the corresponding edges of \(G\) have a vertex in common.

In [14], the Zagreb indices and coindices of the line graphs of the subdivision graphs were studied.

The author in [15] gave the following definition. The line cut-vertex graph of \(G\), written \(L_{c}(G)\), is the graph whose vertices are the edges and cut-vertices of \(G\), with two vertices of \(L_{c}(G)\) adjacent whenever the corresponding edges of \(G\) have a vertex in common; or one corresponds to an edge \(e_i\) of \(G\) and the other corresponds to a cut-vertex \(c_j\) of \(G\) such that \(e_i\) is incident with \(c_j\). Clearly, \(L(G) \subseteq L_{c}(G)\), where \(\subseteq\) is the subgraph notation. Figure 1 shows an example of a graph \(G\) and its line cut-vertex graph \(L_{c}(G)\).

Figure 1. A graph \(G\) and its line cut-vertex graph \(L_{c}(G)\).
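The definition above can be turned into a small construction routine; the sketch below is an illustration added here (not from the paper), written against NetworkX: it builds \(L_{c}(G)\) by taking the edges of \(G\) as vertices, joining edges that share an endpoint, and then joining each cut-vertex of \(G\) to the edges incident with it.

```python
import networkx as nx

def line_cut_vertex_graph(G):
    """Line cut-vertex graph L_c(G): vertices are the edges and cut-vertices of G."""
    L = nx.Graph()
    edges = [frozenset(e) for e in G.edges()]
    L.add_nodes_from(edges)
    # two edge-vertices are adjacent when the corresponding edges of G share an endpoint
    for i in range(len(edges)):
        for j in range(i + 1, len(edges)):
            if edges[i] & edges[j]:
                L.add_edge(edges[i], edges[j])
    # each cut-vertex of G is joined to every edge of G incident with it
    for c in nx.articulation_points(G):
        L.add_node(('cut', c))
        for e in edges:
            if c in e:
                L.add_edge(('cut', c), e)
    return L

# Illustration: the path P_3 has one cut-vertex, and L_c(P_3) is the triangle K_3
print(sorted(d for _, d in line_cut_vertex_graph(nx.path_graph(3)).degree()))   # [2, 2, 2]
```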

In this paper we study the line graph and line cut-vertex graph of the subdivision graph of the friendship graph; and calculate the entire Zagreb indices of the graphs \(L(S(F_{n}))\) and \(L_{c}(S(F_{n}))\). Notations and definitions not introduced here can be found in [16].

2. Entire Zagreb indices of the line graph of the subdivision graph of the friendship graph \(F_n, n \geq 2\)

In this section we calculate the entire Zagreb indices of the line graph of the subdivision graph of the friendship graph.

Theorem 1. Let \(G\) be the line graph of the subdivision graph of the friendship graph. Then \(M_1(G)=8n^{3}+16n\) and \(M_{2}(G)=8n^{4}-4n^{3}+8n^{2}+12n\).

Proof. The subdivision graph \(S(F_{n})\) contains \(5n+1\) vertices and \(6n\) edges, so the line graph of \(S(F_{n})\) contains \(6n\) vertices, out of which \(2n\) vertices are of degree \(2n\) and the remaining \(4n\) vertices are of degree \(2\). Thus \(M_1(G)=8n^{3}+16n\).

Now, in order to find \(M_2(G)\), we first find the size of \(L(S(F_{n}))\). Every \(L(S(F_{n}))\) contains exactly one copy of \(K_{2n}\) together with \(5n\) further edges. Thus the size of \(L(S(F_{n}))\) is \(|E(L(S(F_{n})))|=2n^{2}+4n\). Out of these edges, \(3n\) have both end vertices of degree 2; \(2n\) have end vertices of degrees \(2\) and \(2n\); and the remaining \(n(2n-1)\) have both end vertices of degree \(2n\). Thus \(M_2(G)=8n^{4}-4n^{3}+8n^{2}+12n\).
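The counting in this proof can be cross-checked numerically. The sketch below is an illustrative verification added here (not part of the paper); it builds \(F_{n}\), its subdivision \(S(F_{n})\) and the line graph with NetworkX, and compares \(M_{1}\) and \(M_{2}\) with the stated formulas for small \(n\).

```python
import networkx as nx

def friendship_graph(n):
    """F_n: n triangles sharing the common vertex 0."""
    G = nx.Graph()
    for i in range(n):
        a, b = 2 * i + 1, 2 * i + 2
        G.add_edges_from([(0, a), (0, b), (a, b)])
    return G

def subdivision(G):
    """Replace every edge of G by a path of length 2."""
    S = nx.Graph()
    for u, v in G.edges():
        m = ('mid', u, v)                       # one new vertex per edge
        S.add_edges_from([(u, m), (m, v)])
    return S

def zagreb_indices(G):
    d = dict(G.degree())
    return sum(x**2 for x in d.values()), sum(d[u] * d[v] for u, v in G.edges())

for n in range(2, 6):
    L = nx.line_graph(subdivision(friendship_graph(n)))
    M1, M2 = zagreb_indices(L)
    assert M1 == 8*n**3 + 16*n
    assert M2 == 8*n**4 - 4*n**3 + 8*n**2 + 12*n
print("Theorem 1 checked for n = 2, ..., 5")
```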

Gutman et al., in [8] established a complete set of relations between first and second Zagreb index and coindex of a graph as follows;

Theorem 2. Let \(G\) be a graph with \(n\) vertices and \(m\) edges. Then \begin{equation} \overline{M_{1}}(G)=2m(n-1)-M_1(G), \\ \overline{M_{2}}(G)=2m^2-\frac{1}{2} M_1(G)-M_2(G). \end{equation} We now give the expressions for the first and second Zagreb coindices of the line graph of the subdivision graph of the friendship graph using Theorem 2.

Theorem 3. Let \(G\) be the line graph of the subdivision graph of the friendship graph. Then \(\overline{M_1}(G)=16n^3+44n^2-24n\).

Proof. The order and size of \(G\) are \(6n\) and \(2n^{2}+4n\), respectively. Then Theorem 1 and Theorem 2, give us the result.

Theorem 4. Let \(G\) be the line graph of the subdivision graph of the friendship graph. Then \(\overline{M_2}(G)=32n^{3}+24n^{2}-20n\).

Proof. Theorem 1 and Theorem 2, give us the result.

We now find the forgotten index; and first and second reformulated Zagreb indices of the line graph of the subdivision graph of the friendship graph.

Proposition 1. Let \(G\) be the line graph of the subdivision graph of the friendship graph. Then \(F(G)=16n^4+32n\).

Theorem 5. Let \(G\) be the line graph of the subdivision graph of the friendship graph. Then \(EM_{1}(G)=32n^4-40n^3+24n^2+8n\).

Proof. The size of \(G\) is \(2n^2+4n\), out of which \(3n\) edges are of degree \(2\); \(2n\) edges are of degree \(2n\); and the remaining \(n(2n-1)\) edges are of degree \(4n-2\). Then \(EM_{1}(G)=32n^4-40n^3+24n^2+8n\).

Theorem 6. Let \(G\) be the line graph of the subdivision graph of the friendship graph. Then \(EM_{2}(G)=\frac{1}{2}\left(108n^8-300n^7+128n^6+219n^5-179n^4-4n^3+36n^2-8n\right)\).

Proof. Let \(G\) be the line graph of the subdivision graph of the friendship graph. We consider the following four cases:

Case \(1\): There are \(2n\) pairs of edges with degree \(2\). Then the second reformulated Zagreb index is \(8n\).

Case \(2\): There are \(2n\) pairs of edges with degree \(2\) and \(2n\). Then the second reformulated Zagreb index is \(8n^2\).

Case \(3\): There are \(2n(n-1)\) pairs of edges with degree \(2n\) and \(4n-2\). Then the second reformulated Zagreb index is \(4n^2(2n-1)(4n-2)\).

Case \(4\): There are \(\frac{(2n^2-n)(2n^2-n-1)(2n^2-n-2)}{2}\) pairs of edges with degree \(4n-2\). Then the second reformulated Zagreb index is \((4n-2)^2\left(\frac{(2n^2-n)(2n^2-n-1)(2n^2-n-2)}{2}\right)\).

From all the cases mentioned above, we get

\(EM_{2}(G)=\frac{1}{2}\left(108n^8-300n^7+128n^6+219n^5-179n^4-4n^3+36n^2-8n\right)\).

Ghalavand and Ashrafi [17] established a complete set of relations between the entire Zagreb indices and the Zagreb and reformulated Zagreb indices of graphs as follows;

Theorem 7. Let \(G\) be a graph with \(n\) vertices and \(m\) edges. Then \begin{equation*} \begin{split} M_{1}^{\varepsilon}(G)&=M_{1}(G)+EM_{1}(G),\\ M_{2}^{\varepsilon}(G)&=3M_{2}(G)+EM_{2}(G)+F(G)-2M_{1}(G). \end{split} \end{equation*} We now give the expressions for the entire Zagreb indices of the line graph of the subdivision graph of the friendship graph.

Theorem 8. Let \(G\) be the line graph of the subdivision graph of the friendship graph. Then \begin{equation*} \begin{split} M_{1}^{\varepsilon}(G)&=32n^4-32n^3+24n^2+24n,\\ M_{2}^{\varepsilon}(G)&=\frac{1}{2}\left(108n^8-300n^7+128n^6+219n^5-99n^4-60n^3+84n^2+64n\right). \end{split} \end{equation*}

Proof. Theorem 1, Proposition 1, and Theorems 5,6,7, give us the results.

3. Entire Zagreb indices of the line cut-vertex graph of the subdivision graph of the friendship graph

In this section we calculate the entire Zagreb indices of the line cut-vertex graph of the subdivision graph of the friendship graph.

Theorem 9. Let \(G\) be the line cut-vertex graph of the subdivision graph of the friendship graph. Then \(M_1(G)=8n^{3}+12n^{2}+18n\) and \(M_{2}(G)=8n^{4}+12n^{3}+10n^{2}+15n\).

Proof. The line cut-vertex graph of \(S(F_{n})\) contains \(6n+1\) vertices, out of which \(2n\) vertices are of degree \(2n+1\); \(4n\) vertices are of degree \(2\); and the remaining single vertex is of degree \(2n\). Thus \(M_1(G)=8n^{3}+12n^{2}+18n\).

Every \(L_{c}(S(F_{n}))\) contains exactly one copy of \(K_{2n+1}\) together with \(5n\) further edges. Thus the size of \(L_{c}(S(F_{n}))\) is \(|E(L_{c}(S(F_{n})))| = \frac{4n^2+12n}{2}\). Out of these edges, \(3n\) have both end vertices of degree 2; \(2n\) have end vertices of degrees \(2\) and \(2n+1\); \(n(2n-1)\) have both end vertices of degree \(2n+1\); and the remaining \(2n\) have end vertices of degrees \(2n\) and \(2n+1\). Thus \(M_2(G)=8n^{4}+12n^{3}+10n^{2}+15n\).

We now give the expressions for the first and second Zagreb coindices of the line cut-vertex graph of the subdivision graph of the friendship graph using Theorem 2.

Theorem 10. Let \(G\) be the line cut-vertex graph of the subdivision graph of the friendship graph. Then \(\overline{M_1}(G)=16n^3+60n^2-18n\).

Proof. The order and size of \(G\) are \(6n+1\) and \(2n^{2}+6n\), respectively. Then Theorem 9 and Theorem 2, give us the result.

Theorem 11. Let \(G\) be the line cut-vertex graph of the subdivision graph of the friendship graph. Then \(\overline{M_2}(G)=32n^{3}+56n^{2}-24n\).

Proof. Theorem 9 and Theorem 2, give us the result.

We now find the forgotten index; and first and second reformulated Zagreb indices of the line cut-vertex graph of the subdivision graph of the friendship graph.

Proposition 2. Let \(G\) be the line cut-vertex graph of the subdivision graph of the friendship graph. Then \(F(G)=16n^4+32n^3+12n^2+34n\).

Theorem 12. Let \(G\) be the line cut-vertex graph of the subdivision graph of the friendship graph. Then \(EM_{1}(G)=32n^4+24n^3-8n^2+16n\).

Proof. The size of \(G\) is \(\frac{4n^2+12n}{2}\), out of which \(3n\) edges are of degree \(2\); \(2n\) edges are of degree \(2n+1\); \(n(2n-1)\) edges are of degree \(4n\); and the remaining \(2n\) edges are of degree \(4n-1\). Then \(EM_{1}(G)=32n^4+24n^3-8n^2+16n\).

Theorem 13. Let \(G\) be the line cut-vertex graph of the subdivision graph of the friendship graph. Then \(EM_{2}(G)=64n^8-96n^7-48n^6+88n^5+104n^4-48n^3-12n^2+10n\).

Proof. Let \(G\) be the line cut-vertex graph of the subdivision graph of the friendship graph. We consider the following six cases:

Case \(1\): There are \(2n\) pairs of edges with degree \(2\). Then the second reformulated Zagreb index is \(8n\).

Case \(2\): There are \(2n\) pairs of edges with degree \(2\) and \(2n+1\). Then the second reformulated Zagreb index is \(8n^2+4n\).

Case \(3\): There are \(2n\) pairs of edges with degree \(2n+1\) and \(4n-1\). Then the second reformulated Zagreb index is \(16n^3+4n^2-2n\).

Case \(4\): There are \(4n^2-2n\) pairs of edges with degree \(2n+1\) and \(4n\). Then the second reformulated Zagreb index is \(32n^4-8n^2\).

Case \(5\): There are \(\frac{(2n^2-n)(2n^2-n-1)(2n^2-n-2)}{2}\) pairs of edges with degree \(4n\). Then the second reformulated Zagreb index is \((4n)^2\left(\frac{(2n^2-n)(2n^2-n-1)(2n^2-n-2)}{2}\right)\).

Case \(6\): There are \((4n^2-2n)\) pairs of edges with degree \(4n-1\) and \(4n\). Then the second reformulated Zagreb index is \(64n^4-48n^3+8n^2\).

From all the cases mentioned above, we get

\(EM_{2}(G)=64n^8-96n^7-48n^6+88n^5+104n^4-48n^3-12n^2+10n\).

We now give the expressions for the entire Zagreb indices of the line cut-vertex graph of the subdivision graph of the friendship graph.

Theorem 14. Let \(G\) be the line cut-vertex graph of the subdivision graph of the friendship graph. Then \begin{equation*} \begin{split} M_{1}^{\varepsilon}(G)&=32n^4+32n^3+4n^2+34n,\\ M_{2}^{\varepsilon}(G)&=64n^8-96n^7-48n^6+88n^5+144n^4+4n^3+6n^2+53n. \end{split} \end{equation*}

Proof. Theorem 9, Proposition 2, and Theorems 7, 12 and 13 give us the results.

4. Conclusion

In this paper we have investigated the entire Zagreb indices of the line graph and the line cut-vertex graph of the subdivision graph of the friendship graph. However, determining the Zagreb indices and coindices of some other graph operators remains an open and challenging problem for researchers.

Author Contributions

All authors contributed equally to the writing of this paper. All authors read and approved the final manuscript.

Conflicts of interest

The authors declare no conflict of interest.

References

  1. Gutman, I., & Trinajstic, N. (1972). Graph theory and molecular orbitals. Total \(\pi\)-electron energy of alternant hydrocarbons. Chemical Physics Letters, 17(4), 535-538.
  2. Gutman, I., Rušcic, B., Trinajstic, N., & Wilcox, C. F. (1975). Graph theory and molecular orbitals. XII. Acyclic polyenes. The Journal of Chemical Physics, 62(9), 3399-3405.
  3. Gutman, I., & Das, K. C. (2004). The first Zagreb index 30 years after. MATCH Communications in Mathematical and in Computer Chemistry, 50(1), 83-92.
  4. Nikolic, S., Kovacevic, G., Milicevic, A., & Trinajstic, N. (2003). The Zagreb indices 30 years after. Croatica Chemica Acta, 76(2), 113-124.
  5. Das, K. C., & Gutman, I. (2004). Some properties of the second Zagreb index. MATCH Communications in Mathematical and in Computer Chemistry, 52(1), 103-112.
  6. Furtula, B., Gutman, I., & Dehmer, M. (2013). On structure-sensitivity of degree-based topological indices. Applied Mathematics and Computation, 219(17), 8973-8978.
  7. Gutman, I. (2013). Degree-based topological indices. Croatica Chemica Acta, 86(4), 351-361.
  8. Gutman, I., Furtula, B., Vukicevic, Z. K., & Popivoda, G. (2015). On Zagreb indices and coindices. MATCH Communications in Mathematical and in Computer Chemistry, 74(1), 5-16.
  9. Gutman, I., & Tošvic, J. (2013). Testing the quality of molecular structure descriptors. Vertex-degree-based topological indices. Journal of the Serbian Chemical Society, 78(6), 805-810.
  10. Došlic, T. (2008). Vertex-weighted Wiener polynomials for composite graphs. Ars Mathematica Contemporanea, 1(1), 66-80.
  11. Furtula, B., & Gutman, I. (2015). A forgotten topological index. Journal of Mathematical Chemistry, 53(4), 1184-1190.
  12. Milicevic, A., Nikolic, S., & Trinajstic, N. (2004). On reformulated Zagreb indices. Molecular Diversity, 8(4), 393-399.
  13. Alwardi, A., Alqesmah, A., Rangarajan, R., & Cangul, I. N. (2018). Entire Zagreb indices of graphs. Discrete Mathematics, Algorithms and Applications, 10(3), 1850037.
  14. Ranjini, P. S., Lokesha, V., & Cangül, I. N. (2011). On the Zagreb indices of the line graphs of the subdivision graphs. Applied Mathematics and Computation, 218(3), 699-702.
  15. Kulli, V. R. (1975). On lict and litact graph of a graph. Proceedings of the Indian National Science Academy, 41(3 Part A), 275-280.
  16. Harary, F. (1969). Graph Theory. Addison-Wesley, Reading, Mass.
  17. Ghalavand, A., & Ashrafi, A. R. (2019). Bounds on the entire Zagreb indices of graphs. MATCH Communications in Mathematical and in Computer Chemistry, 81, 371-381.
Towards understanding the mathematics of the \(2^{nd}\) law of thermodynamics
https://old.pisrt.org/psr-press/journals/oms-vol-4-2020/towards-understanding-the-mathematics-of-the-2nd-law-of-thermodynamics/
OMS-Vol. 4 (2020), Issue 1, pp. 466 - 469 Open Access Full-Text PDF
Md. Shafiqul Islam

Open Journal of Mathematical Sciences

Towards understanding the mathematics of the \(2^{nd}\) law of thermodynamics

Md. Shafiqul Islam
Department of Materials and Metallurgical Engineering, Bangladesh University of Engineering and Technology, Dhaka-1000, Bangladesh.; mdshafiqulislam@ug.mme.buet.ac.bd

Abstract

In this paper, the mathematical formulation of the \(2^{nd}\) law of thermodynamics is explained, and the mathematical formulation of the \(1^{st}\) law is revisited from this novel perspective. It is not claimed that the \(2^{nd}\) law of thermodynamics is redundant given the \(1^{st}\) law; rather, it is shown here how the mathematical formulation of the \(2^{nd}\) law can be extracted from the mathematical formulation of the \(1^{st}\) law of thermodynamics. The Clausius statement of the \(2^{nd}\) law of thermodynamics is that it is impossible to construct a device whose sole effect is the transfer of heat from a cool reservoir to a hot reservoir. Alternative statements of the law are "all spontaneous processes are irreversible" and "the entropy of an isolated system always increases". Backed by strong experimental evidence, this empirical law tells us the arrow of time and the direction of spontaneous changes.

Keywords:

\(2^{nd}\) law of thermodynamics, entropy and disorder, irreversibility.

1. Introduction

The \(1^{st}\) law of thermodynamics is often presented as a version of the law of conservation of energy, adapted to thermodynamic systems, and can be formulated as:

\[\delta Q_{rev}=dU-\delta W_{rev},\] which states that the heat accumulated by a closed system is spent on changing its internal energy and on the work done by the system on its surroundings. Here \(\delta{\mathrm{W}}_{\mathrm{rev}}\) is the differential work done by the surroundings on the system, so the work done by the system on its surroundings is \((-\delta W_{rev})\); hence \(\delta Q_{rev}=dU+p_{ext}dV\). This is a very common-sense law, adapted to thermodynamic systems: the total energy of an isolated system, e.g., the universe, is conserved.

The aim of this paper is to show the mathematical formulation of the \(2^{nd}\) law of thermodynamics as a consequence of the \(1^{st}\) law.

2. Preliminaries

In the mathematical formulation of the \(1^{st}\) law of thermodynamics, the quantities of heat and work transfer are energies that depend on the process followed, whereas the internal energy of a system is an extensive thermodynamic property that quantitatively describes an equilibrium state irrespective of the process.

Here, \(\delta Q_{rev}\) and \(\delta W_{rev}\) are non-exact differentials and \(dU\) is an exact differential. Let \(X(U,V)\) be an integrating factor such that \(X\delta Q_{rev}\) becomes exact; then \(X\delta Q_{rev}=XdU+Xp_{ext}dV\) is an exact differential equation.

Definition 1. \(dZ = MdP + NdQ\) will be an exact differential equation provided that \(\left(\frac{\partial M}{\partial Q}\right)_P=\left(\frac{\partial N}{\partial P}\right)_Q\).

Assumption 1. Let us assume that the integrating factor \(X\) is a function of the internal energy \(U\) only. Joule showed experimentally that the internal energy of an ideal gas is a function of its temperature only, independent of pressure or volume. So, \(X = f(T)\). As \(X\delta Q_{rev}=XdU+Xp_{ext}dV\) is an exact differential equation, from Definition 1 we have \[\left(\frac{\partial X}{\partial V}\right)_U=\left(\frac{\partial Xp_{ext}}{\partial U}\right)_V.\] The above partial differential equation in general has more than one solution. The general solution cannot be determined, as the internal energy as a function of temperature and the boundary conditions are not known. However, a particular solution for \(X\) can be obtained by assuming the integrating factor \(X\) to be a function of the internal energy \(U\) only.

From Assumption 1, we have \[X = f(T)\] \[\therefore \left(\frac{\partial X}{\partial V}\right)_U=0,\] i.e.,

\[\left(\frac{\partial X}{\partial V}\right)_U=\left(\frac{\partial Xp_{ext}}{\partial U}\right)_V=0,\] i.e., \[Xp_{ext} = \text{function of }\,\,V = g\left(\frac{T}{p_{ext}}\right)\] for ideal gas.

The functional equation \(f(T)p_{ext}=g\left(\frac{T}{p_{ext}}\right)\) has the solution \(f(y)=g(y)=\frac{1}{y}\), unique up to a multiplicative constant, so the integrating factor may be taken as \(X=f(T)=\frac{1}{T}\).

Now, \(\frac{\delta Q_{rev}}{T}\) is an exact differential. For a reversible process the temperature of the system is the same as the temperature of the surroundings at any particular instant of time, so \(\frac{\delta Q_{rev}}{T_{surr}}\) is an exact differential and is known as the differential change in entropy.
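For an ideal gas this can be checked directly in the variables \((T,V)\) (equivalent to \((U,V)\) here, since \(U\) depends only on \(T\)): with \(dU=C_{V}\,dT\) and \(p_{ext}=nRT/V\), the differential \(\delta Q_{rev}=C_{V}\,dT+\frac{nRT}{V}dV\) fails the exactness test of Definition 1, while \(\frac{\delta Q_{rev}}{T}\) passes it. The SymPy sketch below is an illustration added here, assuming a constant \(C_{V}\):

```python
import sympy as sp

T, V, n, R, Cv = sp.symbols('T V n R C_v', positive=True)

# delta Q_rev = M dT + N dV for an ideal gas with dU = Cv dT and p_ext = nRT/V
M, N = Cv, n*R*T/V
print(sp.diff(M, V) - sp.diff(N, T))        # -n*R/V, nonzero: delta Q_rev is not exact

# multiplying by the integrating factor X = 1/T
print(sp.diff(M/T, V) - sp.diff(N/T, T))    # 0: delta Q_rev / T is exact (it is dS)
```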

It can be shown that the maximum work delivered to the surroundings in an isothermal gas expansion is obtained along a reversible path [1], so

\[(-W)_{irrev}< (-W)_{rev},\] i.e., \[W_{irrev}>W_{rev},\] and \[\Delta U=Q_{irrev}+W_{irrev}=Q_{rev}+W_{rev},\] so \[Q_{rev}>Q_{irrev}.\] Entropy being a state function,
\begin{eqnarray}\label{eq2.1} \oint \frac{\delta Q_{rev}}{T_{surr}}=0. \end{eqnarray}
(1)
But, \(Q_{irrev}< Q_{rev}\), so
\begin{eqnarray}\label{eq2.2} \oint \frac{\delta Q_{irrev}}{T_{surr}}< 0. \end{eqnarray}
(2)

3. Existence on global systems

Equation (1) and inequality (2) were proved using the ideal gas laws, but they hold for any material in the system. The reason is that \(\delta Q_{rev}\) is the reversible heat accumulated by the system from its surroundings and \(T_{surr}\) is the temperature of the surroundings. The surroundings cannot see inside the system, and they provide the same heat, irrespective of the material in the system, for the same surroundings temperature at the instant the heat leaves them. Similarly, \(\delta Q_{irrev}\) is the irreversible heat accumulated by the system from its surroundings and \(T_{surr}\) is the temperature of the surroundings at that instant; both are independent of the system, provided that the heat transfer occurs irreversibly. The accumulation rate of heat may be different for different systems, but that is a matter of the kinetics of the systems, which is beyond the scope of this paper.

Theorem 1. The 2nd Law of Thermodynamics is a consequence of the 1st Law of Thermodynamics.

The above argument establishes the relation

\[\oint \frac{\delta Q}{T_{surr}}\leq 0\] for global systems, where equality holds for reversible changes and strict inequality holds for irreversible changes.

Figure 1. Closed path with a spontaneous and a reversible portion

Let a system (Figure 1) be isolated and change spontaneously from \(A\) to \(B.\) The system is then brought into contact with a heat source at the same temperature as \(B\) at that particular instant of time and reversibly brought back from \(B\) to \(A.\) (To prove the above statement it is necessary and sufficient to show that the change from \(A\) to \(B\) is irreversible.)

Proof of Theorem 1. Let us first assume that the spontaneous change from \(A\) to \(B\) is reversible; then the total cyclic path is also reversible, as all elements of the path are reversible [2]. Hence

\[\oint \frac{\delta Q}{T_{surr}}=0 \ \Rightarrow \int^{B}_{A}\frac{\delta Q_{spontaneous}}{T_{surr}}+\int^{A}_{B}\frac{\delta Q_{rev}}{T_{surr}}=0.\] But \[\int^{B}_{A}\frac{\delta Q_{spontaneous}}{T_{surr}}=0,\,\, \text{as}\,\,\, Q_{spontaneous}=0\,\, \text{(isolated)}.\] Therefore \[ \oint \frac{\delta Q}{T_{surr}}=\int^{A}_{B}\frac{\delta Q_{rev}}{T_{surr}}=0,\] which is not true, because \(Q_{rev}(B\rightarrow A)\neq 0\) (not isolated), so \[\oint \frac{\delta Q}{T_{surr}} \neq 0.\] This is a contradiction, so our initial assumption was incorrect, i.e., the spontaneous change from \(A\) to \(B\) must be irreversible. Consequently, the cycle of heating and cooling back to the same point \(A\) is an irreversible cycle, since a portion of this path \((A\rightarrow B)\) has been proved to be irreversible.

So,

\[\oint \frac{\delta Q}{T_{surr}}< 0 \ \Rightarrow \int^{B}_{A}\frac{\delta Q_{spontaneous}}{T_{surr}}+\int^{A}_{B}\frac{\delta Q_{rev}}{T_{surr}}< 0\ \Rightarrow \int^{A}_{B}\frac{\delta Q_{rev}}{T_{surr}}< 0,\] because \[\int^{B}_{A}\frac{\delta Q_{spontaneous}}{T_{surr}}=0 \  \  (isolated).\] So \[\int^{B}_{A}\frac{\delta Q_{rev}}{T_{surr}}>0,\] which is the change in entropy for the spontaneous process, indicating that the entropy always increases in an isolated system for spontaneous processes to occur.

From the above argument it is proved that ``All spontaneous processes are irreversible''. Also ''the entropy of an isolated system always increases''.

4. Efficiency of heat engines

Let us consider a heat engine which, in its first cycle, gains an amount of heat \(\Delta Q_1\) from a hot reservoir and releases an amount of heat \(|\Delta Q_2|\) to a cold reservoir (so that \(\Delta Q_2< 0\) in the sign convention used here). The amount of heat energy converted into useful work by the heat engine is \(\Delta Q_1-|\Delta Q_2|=\Delta Q_1+\Delta Q_2>0\).

The engine is said to be reversible if in the reversed cycle it can work as a heat pump, i.e., it takes the heat \(|\Delta Q_2|\) from the cold reservoir and releases the heat \(\Delta Q_1\) to the hot reservoir, provided that it consumes the useful work produced in the first cycle. That means that, after completing the two cycles, the engine is back in its old state, so \(\frac{\Delta Q_1}{T_1} + \frac{\Delta Q_2}{T_2} = 0\).

But the equality does not hold for actual (irreversible) engines, since we have proved that all spontaneous processes are irreversible. That means, after completing the two cycles, the engine is not back in its old state, so \(\frac{\Delta Q_1}{T_1} + \frac{\Delta Q_2}{T_2} < 0\), which implies \( 1 + \frac{T_1}{\Delta Q_1} \times \frac{\Delta Q_2}{T_2} < 0 \), and hence \(\frac{\Delta Q_2}{\Delta Q_1} < -\frac{T_2}{T_1}\).

The efficiency of a heat engine is defined as the ratio of the work done per cycle to the heat energy it gains per cycle, i.e., \(\frac{\Delta Q_1+\Delta Q_2}{\Delta Q_1} = 1 + \frac{\Delta Q_2}{\Delta Q_1}< 1-\frac{T_2}{T_1}\), which is clearly less than 100%, since \(T_1,T_2>0\) by the \(3^{rd}\) law of thermodynamics.
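As a quick numerical illustration (the reservoir temperatures below are chosen purely for the example and are not taken from the text), an engine operating between \(T_1=500\,\mathrm{K}\) and \(T_2=300\,\mathrm{K}\) satisfies \[\frac{\Delta Q_1+\Delta Q_2}{\Delta Q_1}< 1-\frac{T_2}{T_1}=1-\frac{300}{500}=0.4,\] so no such engine can convert more than \(40\%\) of the heat it absorbs into useful work.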

This supports the alternative formulation of the \(2^{nd}\) law of thermodynamics used in the literature: ``There does not exist any heat engine that does nothing but absorb heat energy from one single reservoir and convert it into work.''

5. Conclusion

In this paper, we proved the \(2^{nd}\) law of thermodynamics by mathematical arguments. Some familiar quotes in the literature relevant to the \(2^{nd}\) law of thermodynamics are ``you can't unscramble an egg'' and ``you can't take the cream out of the coffee'', as these are irreversible processes. No matter how long you wait, the cream will not jump out of the coffee back into the creamer, nor can it travel back in time to its old state. These seem to be merely natural phenomena, but we have given a mathematical validation of them. So the \(2^{nd}\) law of thermodynamics is not only a law of nature, but also a law of mathematics.

Conflicts of interest

The author declares no conflict of interest.

References

  1. Atkins, P. (2001). Physical chemistry. W. H. Freeman. [Google Scholor]
  2. Silbey, R. J. (2004). Physical chemistry. Wiley Global Education. [Google Scholor]
]]>
Measure of noncompactness for nonlinear Hilfer fractional differential equation with nonlocal Riemann–Liouville integral boundary conditions in Banach spaces https://old.pisrt.org/psr-press/journals/oms-vol-4-2020/measure-of-noncompactness-for-nonlinear-hilfer-fractional-differential-equation-with-nonlocal-riemann-liouville-integral-boundary-conditions-in-banach-spaces/ Wed, 16 Dec 2020 12:27:34 +0000 https://old.pisrt.org/?p=4805
OMS-Vol. 4 (2020), Issue 1, pp. 456 - 465 Open Access Full-Text PDF
Abdelatif Boutiara, Maamar Benbachir, Kaddour Guerbati
Abstract: This paper investigates the existence results and uniqueness of solutions for a class of boundary value problems for fractional differential equations with the Hilfer fractional derivative. The reasoning is mainly based upon Mönch's fixed point theorem associated with the technique of measure of noncompactness. We illustrate our main findings, with a particular case example, included to show the applicability of our outcomes. The boundary conditions introduced in this work are of quite general nature and reduce to many special cases by fixing the parameters involved in the conditions.
]]>

Open Journal of Mathematical Sciences

Measure of noncompactness for nonlinear Hilfer fractional differential equation with nonlocal Riemann–Liouville integral boundary conditions in Banach spaces

Abdelatif Boutiara, Maamar Benbachir\(^1\), Kaddour Guerbati
Laboratoire de Mathématiques et Sciences Appliquées, University of Ghardaia, Algeria.; (A.B & K.G)
Faculty of Sciences, Saad Dahlab University, Blida, Algeria.; (M.B)
\(^{1}\)Corresponding Author: mbenbachir2001@gmail.com

Abstract

This paper investigates the existence results and uniqueness of solutions for a class of boundary value problems for fractional differential equations with the Hilfer fractional derivative. The reasoning is mainly based upon Mönch’s fixed point theorem associated with the technique of measure of noncompactness. We illustrate our main findings, with a particular case example, included to show the applicability of our outcomes. The boundary conditions introduced in this work are of quite general nature and reduce to many special cases by fixing the parameters involved in the conditions.

Keywords:

Fractional differential equation, Hilfer fractional derivative, nonlocal, Kuratowski measures of noncompactness, Mönch fixed point theorems, Banach space.

1. Introduction

Fractional differential equations have recently been applied in various areas of engineering, mathematics, physics and bio-engineering, and other applied sciences [1,2]. For some fundamental results in the theory of fractional calculus and fractional differential equations, we refer the reader to the monographs of Abbas, Benchohra and N’Guérékata [3], Samko, Kilbas and Marichev [4], Kilbas, Srivastava and Trujillo [5] and Zhou [6], the papers by Abbas et al., [7,8,9] and the references therein.

In 2000, a generalization of derivatives of both Riemann-Liouville and Caputo was given by Hilfer in [1] when he studied fractional time evolution in physical phenomena. He named it as generalized fractional derivative of order \(\alpha\in(0,1)\) and a type \(\beta\in[0,1]\) which can be reduced to the Riemann-Liouville and Caputo fractional derivatives when \(\beta=0\) and \(\beta=1\), respectively. Many authors call it the Hilfer fractional derivative. Such derivative interpolates between the Riemann-Liouville and Caputo derivative in some sense. Some properties and applications of the Hilfer derivative are given in [1,10] and references cited therein.

Recently, considerable attention has been given to the existence of solutions of initial and boundary value problems for fractional differential equations with Hilfer fractional derivative; see [1,2,10,11,12,13,14] and the references therein. In [15,16,17,18], the measure of noncompactness was applied to some classes of functional Riemann-Liouville or Caputo fractional differential equations in Banach spaces.

In this paper, we consider the existence of solutions of the following boundary value problem for a nonlinear fractional differential equation,

\begin{equation} \label{1} \left\{ \begin{array}{ll} D^{\alpha,\beta}_{0^{+}}y(t)=f(t,y(t)), & \hbox{\(t\in J:=[0,T]\);} \\ a_{1}I^{1-\gamma}y(0)+b_{1}I^{1-\gamma+q_{1}}y(\eta_{1})=\lambda_{1}, & \hbox{\(0< q_{1}\leq1\);} \\ a_{2}I^{1-\gamma}y(T)+b_{2}I^{1-\gamma+q_{2}}y(\eta_{2})=\lambda_{2}, & \hbox{\(0< q_{2}\leq1\).} \end{array} \right. \end{equation}
(1)
where \(D^{\alpha,\beta}_{0^{+}}\) is the Hilfer fractional derivative with \(1 < \alpha\leq 2\), \(0\leq\beta\leq1\), \(0< \eta_{i}< T\), \(i=1,2\), and \(\gamma=\alpha+\beta-\alpha\beta\). Here \(E\) is a reflexive Banach space with norm \(\|.\|\), \(f : J\times E\rightarrow E\) is a given continuous function satisfying some assumptions that will be specified later, and \(a_{i}, b_{i}, \lambda_{i}\), \(i=1,2\), are real constants.

The organization of this work is as follows; in Section 2, we introduce some notations, definitions, and lemmas that will be used later. Section 3 treats the existence of solutions in Banach spaces by using the Mönch's fixed point theorem combined with the technique of measures of noncompactness. In Section 4, we illustrate the obtained results by an example. Finally, the paper concludes with some interesting observations.

2. Preliminaries

In what follows we introduce definitions, notations, and preliminary facts which are used in the sequel. For more details, we refer to [1,4,5,19,20,21,22].

Let \(C(J,E)\) be the Banach space of continuous functions \(y : J\rightarrow E\), with the usual supremum norm

\[\|y\|_{\infty} = \sup\{\|y(t)\|, t \in J \},\] and \(L^{1}(J,E)\) be the Banach space of measurable functions \(y : J \rightarrow E\) which are Bochner integrable, equipped with the norm \[\|y\|_{L^{1}} =\int_{J} \|y(t)\| dt.\] Further, let \(AC^{1}(J,E)\) be the space of functions \(y : J \rightarrow E\) whose first derivative is absolutely continuous.

Definition 1. [23] Let \(J=[0,T]\) be a finite interval and \(1\leq\gamma< 2\). We introduce the weighted space \(C_{1-\gamma}(J,E)\) of continuous functions \(f\) on \((0,T]\) by \[C_{1-\gamma}(J,E)=\{f:(0,T] \rightarrow E: (t-a)^{1-\gamma}f(t)\in C(J,E)\}.\] In the space \(C_{1-\gamma}(J,E)\), we define the norm \[\|f\|_{C_{1-\gamma}}= \|(t-a)^{1-\gamma}f (t)\|_{C}.\]

Definition 2. [23] Let \(1< \alpha< 2, 0 \leq \beta \leq 1\). The weighted space \(C^{\alpha,\beta}_{1-\gamma}(J,E)\) is defined by \[C^{\alpha,\beta}_{1-\gamma}(J,E)=\{f:(0, T]\rightarrow E : D^{\alpha,\beta}_{0^{+}}f\in C_{1-\gamma}(J,E)\}, \quad \gamma=\alpha+\beta-\alpha\beta,\] and \[C^{1}_{1-\gamma}(J,E)=\{f:(0, T]\rightarrow E : f'\in C_{1-\gamma}(J,E)\}, \quad \gamma=\alpha+\beta-\alpha\beta,\] with the norm

\begin{equation} \label{} \|f\|_{C_{1-\gamma}^{1}}=\|f\|_{C}+ \|f'\|_{C_{1-\gamma}}. \end{equation}
(2)
Moreover, \(C_{1-\gamma}(J,E)\) is complete metric space of all continuous functions mapping \(J\) into \(E\) with the metric \(d\) defined by \[d(y_{1},y_{2})=\|y_{1}-y_{2}\|_{C_{1-\gamma}(J,E)}:=\max_{t\in J}|(t-a)^{1-\gamma}[y_{1}(t)-y_{2}(t)]| .\] For details, see [23].

Now, we give some results and properties of fractional calculus.

Definition 3. [24] Let \(f : (0, \infty) \rightarrow \mathbb{R}\) be a real valued continuous function. The Riemann-Liouville fractional integral of a function \(f\) of order \(\alpha \in \mathbb{R^{+}}\) is denoted by \(I^{\alpha}_{0^{+}}f\) and defined by

\begin{equation} \label{p1} I^{\alpha}_{0^{+}}f(t) =\frac{1}{\Gamma(\alpha)}\int_{0}^{t}(t-s)^{\alpha-1}f(s)ds, \;\;\; t>0, \end{equation}
(3)
where \(\Gamma(\alpha)\) is the Euler's Gamma function.

Definition 4. [5] Let \(f : (0, \infty) \rightarrow \mathbb{R}\) be a real valued continuous function. The Riemann-Liouville fractional derivative of a function \(f\) of order \(\alpha \in \mathbb{R}^{+}_{0}=[0,+\infty)\) is denoted by \(D^{\alpha}_{0^{+}}f\) and defined by

\begin{equation} \label{p2} D^{\alpha}_{0^{+}}f (t) =\frac{1}{\Gamma(n-\alpha)}\frac{d^{n}}{dt^{n}}\int_{0}^{t}(t-s)^{n-\alpha-1}f(s)ds, \end{equation}
(4)
where \(n=[\alpha]+1\), and \([\alpha]\) means the integral part of \(\alpha\), provided the right hand side is pointwise defined on \((0,\infty)\).

Definition 5. [5] The Caputo fractional derivative of function \(f\) with order \(\alpha>0, n-1< \alpha< n, n\in\mathbb{N}\) is defined by

\begin{equation} \label{c9} ^{C}D^{\alpha}_{0^{+}}f (t) =\frac{1}{\Gamma(n-\alpha)}\int_{0}^{t}(t-s)^{n-\alpha-1}f^{(n)}(s)ds, \;\;\; t>0. \end{equation}
(5)
In [1], Hilfer studied applications of a generalized fractional operator having the Riemann-Liouville and Caputo derivatives as specific cases, (see also [2,10]).

Definition 6. [1] The Hilfer fractional derivative \(D^{\alpha,\beta}_{0^{+}}\) of order \(\alpha\) \((n-1< \alpha< n)\) and type \(\beta\) \((0\leq\beta\leq 1)\) is defined by

\begin{equation} \label{p3} D^{\alpha,\beta}_{0^{+}}=I^{\beta(n-\alpha)}_{0^{+}}D^{n}I^{(1-\beta)(n-\alpha)}_{0^{+}}f(t), \end{equation}
(6)
where \(I^{\alpha}_{0^{+}}\) and \(D^{\alpha}_{0^{+}}\) are Riemann-Liouville fractional integral and derivative defined by (3) and (4), respectively.

Remark 1. ([25]) The Hilfer fractional derivative interpolates between the Riemann-Liouville ((4), if \(\beta=0\)) and Caputo ((5), if \(\beta=1\)) fractional derivatives, since \begin{equation} D_{0^{+}}^{\alpha,0}= {}^{R-L}D^{\alpha}_{0^{+}} \quad\text{and}\quad D^{\alpha,1}_{0^{+}}= {}^{C}D^{\alpha}_{0^{+}}. \end{equation}

Lemma 1. Let \(1< \alpha< 2\), \(0\leq\beta\leq1\), \(\gamma =\alpha +\beta-\alpha\beta\), and \(f\in L^{1}(J,E)\). The operator \(D^{\alpha,\beta}_{0^{+}}\) can be written as \begin{align*} D^{\alpha,\beta}_{0^{+}}f(t) & =\left(I^{\beta(1-\alpha)}_{0^{+}}\frac{d}{dt}I^{(1-\gamma)}_{0^{+}}f\right)(t)=I^{\beta(1-\alpha)}_{0^{+}}D^{\gamma}f(t), \  \  t\in J. \end{align*}

Lemma 2. Let \(1< \alpha< 2\), \(0\leq\beta\leq1\) and \(\gamma=\alpha+\beta-\alpha\beta\). If \(D^{\beta(1-\alpha)}_{0^{+}}f\) exists and is in \(L^{1}(J,E)\), then \begin{equation} D^{\alpha,\beta}_{0^{+}}I^{\alpha}_{0^{+}}f(t)=I^{\beta(1-\alpha)}_{0^{+}}D^{\beta(1-\alpha)}_{0^{+}}f(t), \  \  \  \  t \in J. \end{equation} Furthermore, if \(f \in C_{1-\gamma}(J,E)\) and \(I^{1-\beta(1-\alpha)}_{0^{+}}f\in C^{1}_{1-\gamma}(J,E)\), then \begin{equation} D^{\alpha,\beta}_{0^{+}}I^{\alpha}_{0^{+}}f(t)=f(t), \  \  \  \  t\in J. \end{equation}

Lemma 3. Let \(1< \alpha< 2\), \(0\leq\beta\leq1\), \(\gamma =\alpha +\beta-\alpha\beta\), and \(f \in L^{1}(J,E)\). If \(D^{\gamma}_{0^{+}}f\) exists and is in \(L^{1}(J,E)\), then \begin{align*} I^{\alpha}_{0^{+}}D^{\alpha,\beta}_{0^{+}}f(t)&=I^{\gamma}_{0^{+}}D^{\gamma}_{0^{+}}f(t)=f(t)-\frac{I^{1-\gamma}_{0^{+}}f (0^{+})}{\Gamma(\gamma)}t^{\gamma-1}, \  \  \  \  t\in J. \end{align*}

Lemma 4. [5] For \(t > a\), we have

\begin{gather}\label{L1} \begin{cases} I^{\alpha}_{0^{+}}(t-a)^{\beta-1}(t)&=\frac{\Gamma(\beta)}{\Gamma(\beta+\alpha)}(t-a)^{\beta+\alpha-1},\\ D^{\alpha}_{0^{+}}(t-a)^{\beta-1}(t)&=\frac{\Gamma(\beta)}{\Gamma(\beta-\alpha)}(t-a)^{\beta-\alpha-1}. \end{cases} \end{gather}
(7)
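As a quick numerical sanity check of the first identity in (7), the following Octave sketch compares the integral in Definition 3 with the closed form (the values of \(\alpha\), \(\beta\), \(a\) and \(t\) are illustrative choices, not taken from the paper):

% Numerical check of the first identity in Lemma 4 (illustrative values)
alpha = 1.5; beta = 2; a = 0; t = 0.8;
s = linspace(a, t, 20001);
lhs = trapz(s, (t - s).^(alpha - 1) .* (s - a).^(beta - 1)) / gamma(alpha);
rhs = gamma(beta) / gamma(beta + alpha) * (t - a)^(beta + alpha - 1);
disp([lhs, rhs])   % both approximately 0.1722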

Lemma 5. Let \(\alpha > 0\) and \(0 \leq \beta \leq 1\). Then the homogeneous differential equation with Hilfer fractional order

\begin{equation} \label{E1} D^{\alpha,\beta}_{0^{+}}h(t)=0 \end{equation}
(8)
has a solution \begin{equation*} h(t)=c_{0}t^{\gamma-1}+c_{1}t^{\gamma+2\beta-2}+c_{2}t^{\gamma+2(2\beta)-3}+ ... +c_{n}t^{\gamma+n(2\beta)-(n+1)}. \end{equation*}

Notation 1. For a given set \(V\) of functions \(v : J\rightarrow E\), let us denote by \[V (t) = \{v(t) : v \in V \}, t \in J,\] and \[V (J ) = \{v(t) : v \in V, t \in J \}.\]

Definition 7. A map \(f : J \times E\rightarrow E\) is said to be Caratheodory if

  • (i) \(t \mapsto f(t,u)\) is measurable for each \(u \in E\);
  • (ii) \(u \mapsto f(t,u)\) is continuous for almost all \(t \in J\).
For convenience, we recall the definitions of the Kuratowski measure of noncompactness and summarize the main properties of this measure.

Definition 8. ([16,19]). Let \(E\) be a Banach space and \(\Omega_{E}\) the family of bounded subsets of \(E\). The Kuratowski measure of noncompactness is the map \(\mu : \Omega_{E} \rightarrow [0, \infty]\) defined by \begin{equation} \mu(B) = \inf \{\epsilon> 0 : B \subseteq \cup^{n}_{i=1}B_{i}\ \text{ and }\ \mathrm{diam}(B_{i}) \leq \epsilon \},\quad B \in \Omega_{E}. \end{equation} This measure of noncompactness satisfies the following important properties [16,19]:

  • (a) \(\mu(B) = 0 \Leftrightarrow \overline{B}\) is compact (\(B\) is relatively compact).
  • (b) \(\mu(B) = \mu(\overline{B}).\)
  • (c) \(A\subset B \Rightarrow \mu(A) \leq \mu(B).\)
  • (d) \(\mu(A + B) \leq \mu(A) + \mu(B)\).
  • (e) \(\mu(cB ) = |c|\mu(B); c \in \mathbb{R}.\)
  • (f) \(\mu(conv B ) = \mu(B).\)
Let us now recall Mönch's fixed point theorem and an important lemma.

Theorem 1. ([15,22]). Let \(D\) be a bounded, closed and convex subset of a Banach space such that \(0\in D\), and let \(N\) be a continuous mapping of \(D\) into itself. If the implication

\begin{equation} \label{imp} V=\overline{conv} N(V) \hbox{ or } V=N(V)\cup \{0\} \Rightarrow \mu(V)=0 \end{equation}
(9)
holds for every subset \(V\) of \(D\), then \(N\) has a fixed point.

Lemma 6. ([22]). Let \(D\) be a bounded, closed and convex subset of the Banach space \(C(J,E)\), \(G\) a continuous function on \(J\times J\), and \(f\) a function from \(J\times E\) into \(E\) which satisfies the Caratheodory conditions. Suppose there exists \(p\in L^{1}(J,\mathbb{R^{+}})\) such that, for each \(t\in J\) and each bounded set \(B \subset E\), \begin{equation} \lim_{h\rightarrow 0^{+}}\mu(f(J_{t,h}\times B)) \leq p(t)\mu(B),\quad \text{where}\quad J_{t,h}=[t-h,t] \cap J. \end{equation} If \(V\) is an equicontinuous subset of \(D\), then \[\mu\left(\left\{\int_{J}G(s, t)f(s,y(s))ds : y \in V\right\}\right) \leq\int_{J}\|G(t, s)\|p(s)\mu(V(s))ds.\]

3. Main results

Let us start by defining what we meant by a solution of Problem (1).

Definition 9. A function \(y \in C_{1-\gamma}(J,E)\) is said to be a solution of the Problem (1) if \(y\) satisfies the equation \(D^{\alpha,\beta}_{0^{+}}y(t)=f(t,y(t))\) on \(J\), and the conditions \( a_{1}I^{1-\gamma}y(0)+b_{1}I^{1-\gamma+q_{1}}y(\eta_{1})=\lambda_{1}\) and \(a_{2}I^{1-\gamma}y(T)+b_{2}I^{1-\gamma+q_{2}}y(\eta_{2})=\lambda_{2}\) .

Lemma 7. Let \(f : J \times E\rightarrow E\) be a function such that \(f(\cdot,y(\cdot)) \in C_{1-\gamma}(J,E)\) for any \(y \in C_{1-\gamma}(J,E)\). Then the unique solution of the linear Hilfer fractional boundary value problem

\begin{equation} \label{E3} D^{\alpha,\beta}_{0^{+}}y(t)=f(t,y(t)), t\in J:=[0,T], \end{equation}
(10)
with boundary conditions
\begin{gather}\label{E31} \begin{cases} a_{1}I^{1-\gamma}y(0)+b_{1}I^{1-\gamma+q_{1}}y(\eta_{1})&=\lambda_{1},\\ a_{2}I^{1-\gamma}y(T)+b_{2}I^{1-\gamma+q_{2}}y(\eta_{2})&=\lambda_{2}, \gamma=\alpha+\beta-\alpha\beta. \end{cases} \end{gather}
(11)
is given by
\begin{align}\label{E2} y(t)&=I^{\alpha}f(t,y(t))+\frac{t^{\gamma-1}}{w}\left[(w_{4}\lambda_{1}-w_{2}\lambda_{2})-w_{4}b_{1}I^{\alpha-\gamma+q_{1}+1}f(\eta_{1},y(\eta_{1}))\right. \notag\\ &\;\;\left.+w_{2}\left(a_{2}I^{\alpha-\gamma+1}f(T,y(T))+b_{2}I^{\alpha-\gamma+q_{2}+1}f(\eta_{2},y(\eta_{2}))\right)\right]\notag\\ &\;\;+\frac{t^{\gamma+2\beta-2}}{w}\left[(w_{1}\lambda_{2}-w_{3}\lambda_{1})+w_{3}b_{1}I^{\alpha-\gamma+q_{1}+1}f(\eta_{1},y(\eta_{1}))\right.\notag\\ &\;\;\left.-w_{1}\left(a_{2}I^{\alpha-\gamma+1}f(T,y(T))+b_{2}I^{\alpha-\gamma+q_{2}+1}f(\eta_{2},y(\eta_{2}))\right)\right]\notag\\ &=I^{\alpha}f(t,y(t))+\frac{(w_{3}t^{\gamma+2\beta-2}-w_{4}t^{\gamma-1})}{w}b_{1}I^{\alpha-\gamma+q_{1}+1}f(\eta_{1},y(\eta_{1})) +\frac{t^{\gamma-1}}{w}(w_{4}\lambda_{1}-w_{2}\lambda_{2})\notag\\ &\;\;+\frac{(w_{2}t^{\gamma-1}-w_{1}t^{\gamma+2\beta-2})}{w}\left(a_{2}I^{\alpha-\gamma+1}f(T,y(T))+b_{2}I^{\alpha-\gamma+q_{2}+1}f(\eta_{2},y(\eta_{2}))\right)\notag\\ &\;\;+\frac{t^{\gamma+2\beta-2}}{w}(w_{1}\lambda_{2}-w_{3}\lambda_{1}), \end{align}
(12)
where
\begin{gather} \begin{cases} w_{1}& =\Gamma(\gamma)\left(a_{1}+b_{1}\frac{\eta_{1}^{q_{1}}}{\Gamma(q_{1}+1)}\right), \\ w_{2}& =\frac{\Gamma(\gamma+2\beta-1)}{\Gamma(2\beta+q_{1})}\eta_{1}^{2\beta+q_{1}-1},\\ w_{3}& =\Gamma(\gamma)\left(a_{2}+b_{2}\frac{\eta_{2}^{q_{2}}}{\Gamma(q_{2}+1)}\right), \\ w_{4}& =\frac{\Gamma(\gamma+2\beta-1)}{\Gamma(2\beta)}\left(b_{2}\eta_{2}^{2\beta+q_{2}-1}+a_{2}T^{2\beta-1}\right),\\ w& =w_{1}w_{4}-w_{2}w_{3}, \  \  with \  \  w\neq0. \end{cases} \end{gather}
(13)

Proof. Assume \(y\) satisfies (12), then Lemma 5 implies that

\begin{equation} \label{Ee} y(t)=c_{1}t^{\gamma-1}+c_{2}t^{\gamma+2\beta-2}+\frac{1}{\Gamma(\alpha)}\int_{0}^{t}(t-s)^{\alpha-1}f(s,y(s))ds, \end{equation}
(14)
for some constants \( c_{1}, c_{2}\in \mathbb{R}\). Applying the boundary conditions (11) in (14), we obtain \begin{align*} &I^{1-\gamma}y(t)=I^{\alpha-\gamma+1}f(t,y(t))+c_{1}\Gamma(\gamma)+c_{2}\frac{\Gamma(\gamma+2\beta-1)}{\Gamma(2\beta)}t^{2\beta-1},\\ &I^{1-\gamma}y(0)=c_{1}\Gamma(\gamma),\\ &I^{1-\gamma}y(T)=I^{\alpha-\gamma+1}f(T,y(T))+c_{1}\frac{\Gamma(\gamma)}{\Gamma(q_{i}+1)}+c_{2}\frac{\Gamma(\gamma+2\beta-1)}{\Gamma(2\beta)}T^{2\beta-1},\\ &I^{1-\gamma+q_{i}}y(\eta_{i})=I^{\alpha-\gamma+q_{i}+1}f(\eta_{i},y(\eta_{i}))+c_{1}\Gamma(\gamma) +c_{2}\frac{\Gamma(\gamma+2\beta-1)}{\Gamma(2\beta+q_{i})}\eta_{i}^{2\beta+q_{i}-1}, i=1,2. \end{align*} After collecting the similar terms in one part and by using (13), we have the following equations
\begin{gather}\label{1.} \begin{cases} c_{1}w_{1}+c_{2}w_{2}&=\lambda_{1}-b_{1}I^{\alpha-\gamma+q_{1}+1}f(\eta_{1},y(\eta_{1})),\\ c_{1}w_{3}+c_{2}w_{4}&=\lambda_{2}-a_{2}I^{\alpha-\gamma+1}f(T,y(T))-b_{2}I^{\alpha-\gamma+q_{2}+1}f(\eta_{2},y(\eta_{2})). \end{cases} \end{gather}
(15)
Solving (15), we find that \begin{equation*} c_{1}=\frac{1}{w}\left[(w_{4}\lambda_{1}-w_{2}\lambda_{2})-w_{4}b_{1}I^{\alpha-\gamma+q_{1}+1}f(\eta_{1},y(\eta_{1})) +w_{2}\left(a_{2}I^{\alpha-\gamma+1}f(T,y(T))+b_{2}I^{\alpha-\gamma+q_{2}+1}f(\eta_{2},y(\eta_{2}))\right)\right], \end{equation*} and \begin{equation*} c_{2}=\frac{1}{w}\left[(w_{1}\lambda_{2}-w_{3}\lambda_{1})+w_{3}b_{1}I^{\alpha-\gamma+q_{1}+1}f(\eta_{1},y(\eta_{1})) -w_{1}\left(a_{2}I^{\alpha-\gamma+1}f(T,y(T))+b_{2}I^{\alpha-\gamma+q_{2}+1}f(\eta_{2},y(\eta_{2}))\right)\right]. \end{equation*} Substituting the value of \(c_{1}, c_{2}\) in (14), we get (12).

In order to present and prove our main results, we consider the following theorem:

Theorem 2. Assume that the following conditions hold:

  • (H1) \(f : J \times E\rightarrow E\) satisfies the Caratheodory conditions;
  • (H2) There exists \(p \in L^{1}(J, \mathbb{R^{+}})\), such that, \(\|f(t,y)\| \leq p(t)\|y\|\), for \(t\in J\) and each \(y\in E;\)
  • (H3) For each \(t\in J\) and each bounded set \(B\subset E\), we have \(\lim_{h\rightarrow0^{+}}\mu(f(J_{t,h}\times B)) \leq t^{1-\gamma}p(t)\mu(B)\); here \(J_{t,h}= [t-h, t] \cap J\);
  • (H4) There exists a constant \(R>0\) such that
    \begin{equation} \label{000} R\geq\frac{K}{(1-p^{*}L)}, \end{equation}
    (16)
where \( L=\frac{T^{\alpha-\gamma+1}}{\Gamma(\alpha+1)} +\frac{|b_{1}|(|w_{3}|T^{2\beta-1}+|w_{4}|)}{|w|\Gamma(\alpha-\gamma+q_{1}+2)}\eta_{1}^{\alpha-\gamma+q_{1}+1} +\frac{|a_{2}|(|w_{2}|+|w_{1}|T^{2\beta-1})}{|w|\Gamma(\alpha-\gamma+2)}T^{\alpha-\gamma+1} +\frac{|b_{2}|(|w_{2}|+|w_{1}|T^{2\beta-1})}{|w|\Gamma(\alpha-\gamma+q_{2}+2)}\eta_{2}^{\alpha-\gamma+q_{2}+1}. \)

    and

    \( K=\frac{T^{2\beta-1}(|w_{1}\lambda_{2}|+|w_{3}\lambda_{1}|)+(|w_{4}\lambda_{1}|+|w_{2}\lambda_{2}|)}{|w|}. \)

Now, we shall prove the following theorem concerning the existence of solutions of (1). Let \(p^{*}=\sup_{t\in J}p(t).\)

Theorem 3. Assume that the hypotheses (H1)-(H3) hold. If

\begin{equation} \label{99.} p^{*} L < 1, \end{equation}
(17)
then (1) has at least one solution defined on \(J\).

Proof. Transform the Problem (1) into a fixed point problem. Consider the operator \(\aleph:C_{1-\gamma}(J,E)\rightarrow C_{1-\gamma}(J,E)\) defined by

\begin{align}\label{0001} &\aleph(y)(t)= I^{\alpha}f(t,y(t))+\frac{(w_{3}t^{\gamma+2\beta-2}-w_{4}t^{\gamma-1})}{w}b_{1}I^{\alpha-\gamma+q_{1}+1}f(\eta_{1},y(\eta_{1})) +\frac{t^{\gamma-1}}{w}(w_{4}\lambda_{1}-w_{2}\lambda_{2})\notag\\ &\;\;+\frac{(w_{2}t^{\gamma-1}-w_{1}t^{\gamma+2\beta-2})}{w}\left(a_{2}I^{\alpha-\gamma+1}f(T,y(T))+b_{2}I^{\alpha-\gamma+q_{2}+1}f(\eta_{2},y(\eta_{2}))\right) +\frac{t^{\gamma+2\beta-2}}{w}(w_{1}\lambda_{2}-w_{3}\lambda_{1}). \end{align}
(18)
Clearly, the fixed points of the operator \(\aleph\) are solutions of the Problem (1).

Take

\[D=\left\{ y\in C_{1-\gamma}(J,E) : \|y\|\leq R \right\},\] where \(R\) satisfies inequality (16). Notice that the subset \(D\) is closed, convex, and equicontinuous. We shall show that the operator \(\aleph\) satisfies all the assumptions of Mönch's fixed point theorem. The proof will be given in three steps.

Step 1. \(\aleph\) is continuous.

Let \(\{y_{n}\}\) be a sequence such that \(y_{n} \rightarrow y\) in \(C_{1-\gamma}(J, E)\). Then for each \(t \in J\),

\begin{align*} & \|t^{1-\gamma}(\aleph(y_{n})(t)-\aleph(y)(t))\|\leq \frac{t^{1-\gamma}}{\Gamma(\alpha)}\int_{0}^{t}(t-s)^{\alpha-1}\|f(s,y_{n}(s))-f(s, y (s))\|ds\\ &+\frac{|b_{1}|(|w_{3}|t^{2\beta-1}+|w_{4}|)}{|w|\Gamma(\alpha-\gamma+q_{1}+1)}\int_{0}^{\eta_{1}}(\eta_{1}-s)^{\alpha-\gamma+q_{1}}\|f(s,y_{n}(s))-f(s, y (s))\|ds \\ &+\frac{|a_{2}|(|w_{2}|+|w_{1}|t^{2\beta-1})}{|w|\Gamma(\alpha-\gamma+1)}\int_{0}^{T}(T-s)^{\alpha-\gamma}\|f(s,y_{n}(s))-f(s, y (s))\|ds\\ &+\frac{|b_{2}|(|w_{2}|+|w_{1}|t^{2\beta-1})}{|w|\Gamma(\alpha-\gamma+q_{2}+1)}\int_{0}^{\eta_{2}}(\eta_{2}-s)^{\alpha-\gamma+q_{2}}\|f(s,y_{n}(s))-f(s,y(s))\|ds\\ &\leq \left\{\frac{T^{\alpha-\gamma+1}}{\Gamma(\alpha+1)} +\frac{|b_{1}|(|w_{3}|T^{2\beta-1}+|w_{4}|)}{|w|\Gamma(\alpha-\gamma+q_{1}+2)}\eta_{1}^{\alpha-\gamma+q_{1}+1} +\frac{|a_{2}|(|w_{2}|+|w_{1}|T^{2\beta-1})}{|w|\Gamma(\alpha-\gamma+2)}T^{\alpha-\gamma+1}\right.\\ &\left.+\frac{|b_{2}|(|w_{2}|+|w_{1}|T^{2\beta-1})}{|w|\Gamma(\alpha-\gamma+q_{2}+2)}\eta_{2}^{\alpha-\gamma+q_{2}+1}\right\}\|f(s,y_{n}(s))-f(s,y(s))\|. \end{align*} Since \(f\) is of Caratheodory type, then by the Lebesgue dominated convergence theorem, we have \[\|\aleph(y_{n})-\aleph(y)\|_{\infty}\rightarrow 0\;\;\; \text{as}\;\;\;n \rightarrow \infty.\]

Step 2. We show that \(\aleph\) maps \(D\) into \(D\).

Take \(y \in D\) and \(t \in J\), and assume that \(\aleph y(t)\neq0\). Then

\begin{align*} &\|t^{1-\gamma}(\aleph y)(t)\|\leq t^{1-\gamma}\left[I^{\alpha}f(s,y(s))(t) +\frac{(|w_{3}|t^{\gamma+2\beta-2}+|w_{4}|t^{\gamma-1})}{|w|}|b_{1}|I^{\alpha-\gamma+q_{1}+1}f(s,y(s))(\eta_{1})\right.\\ &+\frac{(|w_{2}|t^{\gamma-1}+|w_{1}|t^{\gamma+2\beta-2})}{|w|}\left(|a_{2}|I^{\alpha-\gamma+1}f(s,y(s))(T)+|b_{2}|I^{\alpha-\gamma+q_{2}+1}f(s,y(s))(\eta_{2})\right)\\ &\left.+\frac{t^{\gamma-1}}{|w|}(|w_{4}\lambda_{1}|+|w_{2}\lambda_{2}|)+\frac{t^{\gamma+2\beta-2}}{|w|}(|w_{1}\lambda_{2}|+|w_{3}\lambda_{1}|)\right]\\ &\leq \left[t^{1-\gamma}I^{\alpha}|f(s,y(s))(t)| +\frac{|b_{1}|(|w_{3}|t^{2\beta-1}+|w_{4}|)}{|w|}I^{\alpha-\gamma+q_{1}+1}|f(s,y(s))(\eta_{1})|\right.\\ &+\frac{(|w_{2}|+|w_{1}|t^{2\beta-1})}{|w|}\left(|a_{2}|I^{\alpha-\gamma+1}|f(s,y(s))(T)| +|b_{2}|I^{\alpha-\gamma+q_{2}+1}|f(s,y(s))(\eta_{2})|\right)\\ &\left.+\frac{(|w_{4}\lambda_{1}|+|w_{2}\lambda_{2}|)}{|w|}+\frac{t^{2\beta-1}}{|w|}(|w_{1}\lambda_{2}|+|w_{3}\lambda_{1}|)\right]\\ &\leq \left[T^{1-\gamma}I^{\alpha}\|y\|p(s)(T) +\frac{|b_{1}|(|w_{3}|T^{2\beta-1}+|w_{4}|)}{|w|}I^{\alpha-\gamma+q_{1}+1}\|y\|p(s)(\eta_{1})\right.\\ &\left.+\frac{(|w_{2}|+|w_{1}|T^{2\beta-1})}{|w|}\left(|a_{2}|I^{\alpha-\gamma+1}p(s)(T) +|b_{2}|I^{\alpha-\gamma+q_{2}+1}\|y\|p(s)(\eta_{2})\right)\right]\\ &+\frac{T^{2\beta-1}(|w_{1}\lambda_{2}|+|w_{3}\lambda_{1}|)+(|w_{4}\lambda_{1}|+|w_{2}\lambda_{2}|)}{|w|}\\ &\leq p^{*}R\left[\frac{T^{\alpha-\gamma+1}}{\Gamma(\alpha+1)} +\frac{|b_{1}|(|w_{3}|T^{2\beta-1}+|w_{4}|)}{|w|\Gamma(\alpha-\gamma+q_{1}+2)}\eta_{1}^{\alpha-\gamma+q_{1}+1} +\frac{|a_{2}|(|w_{2}|+|w_{1}|T^{2\beta-1})}{|w|\Gamma(\alpha-\gamma+2)}T^{\alpha-\gamma+1}\right.\\ &\left.+\frac{|b_{2}|(|w_{2}|+|w_{1}|T^{2\beta-1})}{|w|\Gamma(\alpha-\gamma+q_{2}+2)}\eta_{2}^{\alpha-\gamma+q_{2}+1} \right]+\frac{T^{2\beta-1}(|w_{1}\lambda_{2}-w_{3}\lambda_{1}|)+(|w_{4}\lambda_{1}|+|w_{2}\lambda_{2}|)}{|w|}\\ &=p^{*}R L +\frac{T^{2\beta-1}(|w_{1}\lambda_{2}|+|w_{3}\lambda_{1}|)+(|w_{4}\lambda_{1}|+|w_{2}\lambda_{2}|)}{|w|}\leq R. \end{align*} Next, we show that \(\aleph(D)\) is equicontinuous. By Step 2, it is obvious that \(\aleph(D)\subset C_{1-\gamma}(J, E )\) is bounded. For the equicontinuity of \(\aleph(D)\), let \(t_{1}, t_{2}\in J\) , \(t_{1}< t_{2}\) and \(y\in D\), so \(t_{2}^{1-\gamma}\aleph y(t_{2})-t_{1}^{1-\gamma}\aleph y(t_{1})\neq0\). 
Hence, \begin{align*} & \|t_{2}^{1-\gamma}\aleph y(t_{2})-t_{1}^{1-\gamma}\aleph y(t_{1})\|\leq I^{\alpha}(t_{2}^{1-\gamma}f(s,x(s))(t_{2})-t_{1}^{1-\gamma}f(s,x(s))(t_{1})\\ &+\frac{|b_{1}w_{3}|(t_{2}^{2\beta-1}-t_{1}^{2\beta-1})}{|w|}I^{\alpha-\gamma+q_{1}+1}f(s,y(s))(\eta_{1})+|w_{1}|\frac{(t_{1}^{2\beta-1}-|t_{2}^{2\beta-1})}{|w|}\left(|a_{2}|I^{\alpha-\gamma+1}f(s,y(s))(T)\right.\\ &\left.+|b_{2}|I^{\alpha-\gamma+q_{2}+1}f(s,y(s))(\eta_{2})\right) +\frac{t_{2}^{2\beta-1}-t_{1}^{2\beta-1}}{|w|}(|w_{1}\lambda_{2}-w_{3}\lambda_{1}|)\\ &\leq\frac{p^{*}R}{\Gamma(\alpha)}\left[t_{2}^{1-\gamma}\int_{0}^{t_{1}}(t_{2}-s)^{\alpha-1}ds -t_{1}^{1-\gamma}\int_{0}^{t_{1}}(t_{1}-s)^{\alpha-1}ds\right.\left.+t_{2}^{1-\gamma}\int_{t_{1}}^{t_{2}}(t_{2}-s)^{\alpha-1}ds\right]\\ &+p^{*}R\left[\frac{|w_{3}b_{1}|(t_{2}^{2\beta-1}-t_{1}^{2\beta-1})}{|w|}I^{\alpha-\gamma+q_{1}+1}(1)(\eta_{1})\right.\\ &\left.+\frac{|w_{1}|(t_{1}^{2\beta-1}-t_{2}^{2\beta-1})}{|w|}\left(|a_{2}|I^{\alpha-\gamma+1}(1)(T) +|b_{2}|I^{\alpha-\gamma+q_{2}+1}(1)(\eta_{2})\right)\right] +\frac{t_{2}^{2\beta-1}-t_{1}^{2\beta-1}}{|w|}(|w_{1}\lambda_{2}-w_{3}\lambda_{1}|)\\ &\leq p^{*}R\left[\frac{(t_{2}^{\alpha-\gamma+1}-t_{1}^{\alpha-\gamma+1})}{\Gamma(\alpha+1)} +\frac{|b_{1}w_{3}|(t_{2}^{2\beta-1}-t_{1}^{2\beta-1})}{|w|\Gamma(\alpha-\gamma+q_{1}+2)}\eta_{1}^{\alpha-\gamma+q_{1}+1}\right.\\ &\left.+\frac{|w_{1}|(t_{1}^{2\beta-1}-t_{2}^{2\beta-1})}{|w|}\left(\frac{|a_{2}|T^{\alpha-\gamma+1}}{\Gamma(\alpha-\gamma+2)} +\frac{|b_{2}|\eta_{2}^{\alpha-\gamma+q_{2}+1}}{\Gamma(\alpha-\gamma+q_{2}+2)}\right)\right]+\frac{t_{2}^{2\beta-1}-t_{1}^{2\beta-1}}{|w|}(|w_{1}\lambda_{2}-w_{3}\lambda_{1}|). \end{align*} As \(t_{1}\rightarrow t_{2}\), the right hand side of the above inequality tends to zero. Hence \(\aleph(D)\subset D\).

Step 3. The implication (9) holds.

Now let \(V\) be a bounded and equicontinuous subset of \(D\) such that \(V\subset \overline{conv}(\{0\}\cup \aleph(V))\). Then \(t\mapsto v(t)=\mu(V(t))\) is continuous on \(J\). Clearly, \(V(t)\subset \overline{conv}(\{0\}\cup \aleph(V))\) for all \(t\in J\). Hence \(\aleph V(t)\subset \aleph D(t)\), \(t\in J\), is bounded in \(E\). By assumption (H3) and the properties of the measure \(\mu\), we have, for each \(t\in J\),

\begin{align*} t^{1-\gamma}v(t)&\leq \mu(t^{1-\gamma}N(V)(t)\cup \{0\})) \leq \mu(t^{1-\gamma}(NV)(t))\\ &\leq \mu\left\{t^{1-\gamma}\left[I^{\alpha}f(t,V(t))+\frac{(w_{3}t^{\gamma+2\beta-2}-w_{4}t^{\gamma-1})}{w}b_{1}I^{\alpha-\gamma+q_{1}+1}f(s,V(s))(\eta_{1}) +\frac{t^{\gamma-1}}{w}(w_{4}\lambda_{1}-w_{2}\lambda_{2})\right.\right.\\ &\;\;+\frac{a_{2}(w_{2}t^{\gamma-1}-w_{1}t^{\gamma+2\beta-2})}{w}I^{\alpha-\gamma+1}f(s,V(s))(T) +\frac{b_{2}(w_{2}t^{\gamma-1}-w_{1}t^{\gamma+2\beta-2})}{w}I^{\alpha-\gamma+q_{2}+1}f(s,V(s))(\eta_{2})\\ &\;\;\left.\left.+\frac{t^{\gamma+2\beta-2}}{w}(w_{1}\lambda_{2}-w_{3}\lambda_{1})\right]\right\}\\ &\leq t^{1-\gamma}I^{\alpha}\mu\left(f(s,V(s))\right)(t) +\frac{|b_{1}|(|w_{3}|t^{2\beta-1}+|w_{4}|)}{|w|}I^{\alpha-\gamma+q_{1}+1}\mu\left(f(s,V(s))\right)(\eta_{1})\\ &\;\;+\frac{|a_{2}|(|w_{2}|+|w_{1}|t^{2\beta-1})}{|w|}I^{\alpha-\gamma+1}\mu\left(f(s,V(s))\right)(T) +\frac{|b_{2}|(|w_{2}|+|w_{1}|t^{2\beta-1})}{|w|}I^{\alpha-\gamma+q_{2}+1}\mu\left(f(s,V(s))\right)(\eta_{2})\\ &\leq t^{1-\gamma}I^{\alpha}\left(p(s)v(s)\right)(t) +\frac{|b_{1}|(|w_{3}|T^{2\beta-1}+|w_{4}|)}{|w|}I^{\alpha-\gamma+q_{1}+1}\left(p(s)v(s)\right)(\eta_{1})\\ &\;\;+\frac{|a_{2}|(|w_{2}|+|w_{1}|T^{2\beta-1})}{|w|}I^{\alpha-\gamma+1}\left(p(s)v(s)\right)(T) +\frac{|b_{2}|(|w_{2}|+|w_{1}|T^{2\beta-1})}{|w|}I^{\alpha-\gamma+q_{2}+1}\left(p(s)v(s)\right)(\eta_{2})\\ &\leq p^{*}\|v\|_{\infty}\left[T^{1-\gamma}I^{\alpha}\left(1\right)(T) +\frac{|b_{1}|(|w_{3}|T^{2\beta-1}+|w_{4}|)}{|w|}I^{\alpha-\gamma+q_{1}+1}\left(1\right)(\eta_{1})\right.\\ &\;\;\left.+\frac{|a_{2}|(|w_{2}|+|w_{1}|T^{2\beta-1})}{|w|}I^{\alpha-\gamma+1}\left(1\right)(T) +\frac{|b_{2}|(|w_{2}|+|w_{1}|T^{2\beta-1})}{|w|}I^{\alpha-\gamma+q_{2}+1}\left(1\right)(\eta_{2})\right]\\ &\leq p^{*}\|v\|_{\infty}\left[\frac{T^{\alpha-\gamma+1}}{\Gamma(\alpha+1)} +\frac{|b_{1}|(|w_{3}|T^{2\beta-1}+|w_{4}|)}{|w|\Gamma(\alpha-\gamma+q_{1}+2)}\eta_{1}^{\alpha-\gamma+q_{1}+1} +\frac{|a_{2}|(|w_{2}|+|w_{1}|T^{2\beta-1})}{|w|\Gamma(\alpha-\gamma+2)}T^{\alpha-\gamma+1}\right.\\ &\;\;\left.+\frac{|b_{2}|(|w_{2}|+|w_{1}|T^{2\beta-1})}{|w|\Gamma(\alpha-\gamma+q_{2}+2)}\eta_{2}^{\alpha-\gamma+q_{2}+1}\right]\\ &= p^{*}\|v\|_{\infty} L. \end{align*} which gives \( \|v\|_{\infty} (1-p^{*} L)\leq 0. \) From (17), we get \(\|v\|=0\), that is, \(v(t)=\mu(V(t))=0\), for each \(t\in J\). Then \(V\) is relatively compact in \(E\). In view of the Ascoli-Arzela theorem, \(V\) is relatively compact in \(D\). Applying now Theorem 1, we conclude that \(\aleph\) has a fixed point which is a solution of (1).

4. Example

Example 1. Let us consider the following Hilfer fractional boundary value problem;

\begin{equation} \label{1.10} \left\{ \begin{array}{ll} D^{\frac{3}{2},\frac{2}{3}}_{0^{+}}y(t)=f(t,y(t)), & \hbox{\(t\in J:=[0,1]\);} \\ D^{\frac{1}{6}}y(0)+I^{\frac{1}{3}}y(\frac{1}{3})=1, & \hbox{\(0< q_{1}\leq1\);} \\ D^{\frac{1}{6}}y(1)+I^{\frac{1}{6}}y(\frac{2}{3})=2, & \hbox{\(0< q_{2}\leq1\),} \end{array} \right. \end{equation}
(19)
where \(\alpha=\frac{3}{2},\) \(\beta=\frac{2}{3},\) \(\gamma=\frac{7}{6},\) \(T=1,\) \(a_{1}=a_{2}=1,\) \(b_{1}=b_{2}=1,\) \(\lambda_{1}=1,\) \(\lambda_{2}=2,\) \(q_{1}=\frac{1}{2},\) \(q_{2}=\frac{1}{6},\) \(\eta_{1}=\frac{1}{3},\) \(\eta_{2}=\frac{2}{3}.\)

Let \(E=l^{1}=\{ x = (x_{1}, x_{2}, ..., x_{n}, ...) :\sum_{n=1}^{\infty}|x_{n}| < \infty \}\) with the norm \(\|x\|_{E}=\sum_{n=1}^{\infty}|x_{n}|\). Set \(y=(y_{1},y_{2},...,y_{n},... )\), \(f=(f_{1},f_{2},...,f_{n},... )\), with \(f_{n}(t,y(t))=\frac{1}{e^{t+2}}|y_{n}(t)|\), \(t\in J\). Clearly, the function \(f\) is continuous. For each \(y\in E\) and \(t\in J\), we have \(\|f(t,y(t))\|\leq \frac{1}{e^{t+2}}\|y\|\). Hence conditions (H1), (H2) and (H3) hold with \(p(t)=\frac{1}{e^{t+2}}\), \(t\in J\), and \(p^{*}=\sup_{t\in J}p(t)=e^{-2}\). Now, we can find that \(p^{*} L\simeq\frac{693}{2500}< 1\); hence (H4) is satisfied and we have \( p^{*}R L+K\leq R\). Thus \(R>\frac{K}{1-Lp^{*}}\), so \(R>\frac{10053}{2000}\). Consequently, Theorem 3 implies that Problem (19) has a solution defined on \(J\).

5. Conclusions

In this paper, we considered the existence of solutions of a boundary value problem for a nonlinear fractional differential equation. Several existence and uniqueness results were derived by using a method involving a measure of noncompactness and a fixed point theorem of Mönch type. Our results are quite general and give rise to many new cases by assigning different values to the parameters involved in the problem. For illustration, we list some special cases.

If we choose \(a_{1}=a_{2}=T=\beta=1\), \(b_{1}=b_{2}=-1\) and \(\lambda_{1}=\lambda_{2}=0\), Problem (1) reduces to the scalar case considered in [26], which was treated there by the standard tools of fixed point theory and the Leray-Schauder nonlinear alternative. Here we extend the results of [26] to cover the abstract case. We remark that the special cases discussed in the conclusion of [26] also arise here.

Author Contributions

All authors contributed equally to the writing of this paper. All authors read and approved the final manuscript.

Conflicts of interest

The authors declare no conflict of interest.

References

  1. Hilfer, R. (2000). Applications of fractional calculus in physics. Singapore, World scientific. [Google Scholor]
  2. Kamocki, R., & Obczynski, C. (2016). On fractional Cauchy-type problems containing Hilfer's derivative. Electronic Journal of Qualitative Theory of Differential Equations, 2016(50), 1-12. [Google Scholor]
  3. Abbas, S., Benchohra, M., & N'Guerekata, G. M. (2014). Advanced fractional differential and integral equations.Nova Science Publishers. [Google Scholor]
  4. Samko, S. G., Kilbas, A. A., & Marichev, O. I. (1993). Fractional integrals and derivatives (Vol. 1). Yverdon-les-Bains, Switzerland: Gordon and Breach Science Publishers, Yverdon. [Google Scholor]
  5. Kilbas, A. A., Srivastava, H. M., & Trujillo, J. J. (2006). Theory and applications of fractional differential equations (Vol. 204). elsevier. [Google Scholor]
  6. Yong, Z. (2014). Basic Theory Of Fractional Differential Equations (Vol. 6). World Scientific.[Google Scholor]
  7. Abbas, S., Benchohra, M., Henderson, J., & Lazreg, J. E. (2017). Measure of noncompactness and impulsive Hadamard fractional implicit differential equations in Banach spaces. Mathematics in Engineering, Science & Aerospace (MESA), 8(3), 1-19. [Google Scholor]
  8. Abbas, S., Benchohra, M., Lazreg, J. E., & Zhou, Y. (2017). A survey on Hadamard and Hilfer fractional differential equations: analysis and stability. Chaos, Solitons & Fractals, 102, 47-71. [Google Scholor]
  9. Abbas, S., Benchohra, M., Lazreg, J. E., & Nieto, J. J. (2018). On a coupled system of Hilfer and Hilfer-Hadamard fractional differential equations in Banach spaces. Journal of Nonlinear Functional Analysis, Article ID 12.[Google Scholor]
  10. Hilfer, R., Luchko, Y., & Tomovski, Z. (2009). Operational method for the solution of fractional differential equations with generalized Riemann-Liouville fractional derivatives. Fractional Calculus and Applied Analysis, 12(3), 299-318. [Google Scholor]
  11. Bhairat, S. P. (2019). Existence and continuation of solutions of Hilfer fractional differential equations. Journal of Mathematical Modeling, 7(1), 1-20. [Google Scholor]
  12. Vivek, D., Kanagarajan, K., & Elsayed, E. M. (2018). Nonlocal initial value problems for implicit differential equations with Hilfer-Hadamard fractional derivative. Nonlinear Analysis: Modelling and Control, 23(3), 341-360. [Google Scholor]
  13. Wang, J., & Zhang, Y. (2015). Nonlocal initial value problems for differential equations with Hilfer fractional derivative. Applied Mathematics and Computation, 266, 850-859. [Google Scholor]
  14. Yang, M., & Wang, Q. R. (2017). Approximate controllability of Hilfer fractional differential inclusions with nonlocal conditions. Mathematical Methods in the Applied Sciences, 40(4), 1126-1138. [Google Scholor]
  15. Agarwal, R. P., Meehan, M., & O'regan, D. (2001). Fixed point theory and applications (Vol. 141). Cambridge university press. [Google Scholor]
  16. Bana, J., & Goebel, K. (1980). Measures of noncompactness in Banach spaces. Lecture Notes in Pure and Applied Mathematics, 60, 97 pages. [Google Scholor]
  17. Banas, J., Jleli, M., Mursaleen, M., Samet, B., & Vetro, C. (Eds.). (2017). Advances in nonlinear analysis via the concept of measure of noncompactness. Springer Singapore.[Google Scholor]
  18. Benchohra, M., Henderson, J., & Seba, D. (2008). Measure of noncompactness and fractional differential equations in Banach spaces. Communications in Applied Analysis, 12(4), 419-428.[Google Scholor]
  19. Akhmerov, R. R., Kamenskii, M. I., Potapov, A. S., Rodkina, A. E., & Sadovskii, B. N. (1992). Measures of noncompactness and condensing operators(Vol. 55). Basel, Birkhäuser. [Google Scholor]
  20. Alvárez, J. C. (1985). Measure of noncompactness and fixed points of nonexpansive condensing mappings in locally convex spaces. Revista de la Real Academia de Ciencias Exactas, Fisicas y Naturales (Espana), 79(1-2), 53-66. [Google Scholor]
  21. Mönch, H. (1980). Boundary value problems for nonlinear ordinary differential equations of second order in Banach spaces. Nonlinear Analysis: Theory, Methods & Applications, 4(5), 985-999. [Google Scholor]
  22. Szufla, S. (1986). On the application of measure of noncompactness to existence theorems. Rendiconti del Seminario Matematico della Universita di Padova, 75, 1-14. [Google Scholor]
  23. Furati, K. M., Kassim, M. D., & Tatar, N. E. (2013). Non-existence of global solutions for a differential equation involving Hilfer fractional derivative. Electronic Journal of Differential Equations, 2013(235), 1-10. [Google Scholor]
  24. Kou, C., Liu, J., & Ye, Y. (2010). Existence and uniqueness of solutions for the Cauchy-type problems of fractional differential equations. Discrete Dynamics in Nature and Society, 2010, Article ID 142175. [Google Scholor]
  25. Furati, K. M., & Kassim, M. D. (2012). Existence and uniqueness for a problem involving Hilfer fractional derivative. Computers & Mathematics with Applications, 64(6), 1616-1626. [Google Scholor]
  26. Ahmad, B., Ntouyas, S. K., & Assolami, A. (2013). Caputo type fractional differential equations with nonlocal Riemann-Liouville integral boundary conditions. Journal of Applied Mathematics and computing, 41(1-2), 339-350.[Google Scholor]
  27. Ahmad, B., Ntouyas, S. K., Tariboon, J., & Alsaedi, A. (2017). Caputo type fractional differential equations with nonlocal Riemann-Liouville and Erdélyi-Kober type integral boundary conditions. Filomat, 31(14), 4515-4529. [Google Scholor]
  28. Banas, J., & Nalepa, R. (2016). On a measure of noncompactness in the space of functions with tempered increments. Journal of Mathematical Analysis and Applications, 435(2), 1634-1651. [Google Scholor]
  29. Banas, J., & Olszowy, L. (2001). Measures of noncompactness related to monotonicity. Annales Societatis Mathematicae Polonae. Seria 1: Commentationes Mathematicae, 41, 13-23. [Google Scholor]
  30. Banas, J., & Sadarangani, K. (2008). On some measures of noncompactness in the space of continuous functions. Nonlinear Analysis: Theory, Methods & Applications, 68(2), 377-383. [Google Scholor]
  31. Hamani, S., & Benhamida, W. (2018). Measure of Noncompactness and Caputo-Hadamard Fractional Differential Equations in Banach Spaces. Eurasian Bulletin of Mathematics, 1(3), 98-106. [Google Scholor]
  32. Vivek, D., Kanagarajan, K., & Sivasundaram, S. (2018). On the behavior of solutions of Hilfer-Hadamard type fractional neutral pantograph equations with boundary conditions. Communications in Applied Analysis, 22(3), 211-232. [Google Scholor]
  33. Dajun, G., Lakshmikantham, V., & Xinzhi, L. (1996). Nonlinear integral equations in abstract spaces. The Netherlands: Kluwer Acadmic Publishers. [Google Scholor]
  34. Gu, H., & Trujillo, J. J. (2015). Existence of mild solution for evolution equation with Hilfer fractional derivative. Applied Mathematics and Computation, 257, 344-354. [Google Scholor]
  35. Haddouchi, F. (2018). Existence results for a class of Caputo type fractional differential equations with Riemann-Liouville fractional integrals and Caputo fractional derivatives in boundary conditions. arXiv preprint arXiv:1805.06015. [Google Scholor]
]]>
Approximate solution of nonlinear ordinary differential equation using ZZ decomposition method https://old.pisrt.org/psr-press/journals/oms-vol-4-2020/approximate-solution-of-nonlinear-ordinary-differential-equation-using-zz-decomposition-method/ Mon, 14 Dec 2020 12:33:39 +0000 https://old.pisrt.org/?p=4788
OMS-Vol. 4 (2020), Issue 1, pp. 448 - 455 Open Access Full-Text PDF
Mulugeta Andualem, Atinafu Asfaw
Abstract: Nonlinear initial value problems are somewhat more difficult to solve, analytically as well as numerically, than linear initial value problems because of the variety of their natures. For this reason, many researchers are still searching for new methods to solve such nonlinear initial value problems, even though many methods already exist. In this article we discuss the approximate solution of nonlinear first order ordinary differential equations using the ZZ decomposition method. This method is a combination of the natural transform method and the Adomian decomposition method.
]]>

Open Journal of Mathematical Sciences

Approximate solution of nonlinear ordinary differential equation using ZZ decomposition method

Mulugeta Andualem\(^1\), Atinafu Asfaw
Department of Mathematics, Bonga University, Bonga, Ethiopia.; (M.A)
Department of Mathematics, Bonga University, Bonga, Ethiopia.; (A.A)
\(^{1}\)Corresponding Author: mulugetaandualem4@gmail.com

Abstract

Nonlinear initial value problems are somewhat more difficult to solve, analytically as well as numerically, than linear initial value problems because of the variety of their natures. For this reason, many researchers are still searching for new methods to solve such nonlinear initial value problems, even though many methods already exist. In this article we discuss the approximate solution of nonlinear first order ordinary differential equations using the ZZ decomposition method. This method is a combination of the natural transform method and the Adomian decomposition method.

Keywords:

ZZ transform, Adomian decomposition, Adomian polynomial, nonlinear differential equation.

1. Introduction

In the literature there are numerous integral transforms [1] that are widely used in physics, astronomy and engineering. Integral transforms have been used extensively to solve differential equations, and thus there are several works on the theory and application of integral transforms such as the Laplace, Fourier, Mellin, Hankel, Sumudu, Elzaki and Aboodh transforms. The Aboodh transform [2,3] was introduced by Khalid Aboodh in 2013 to facilitate the process of solving ordinary and partial differential equations in the time domain. This transform has a deep connection with the Laplace and Elzaki transforms [4,5,6]. A new integral transform, named the ZZ transform [7,8,9,10], was introduced by Zain Ul Abadin Zafar; it has been successfully applied to integral equations and ordinary differential equations. The main objective of this article is to solve nonlinear ordinary differential equations using the ZZ transform.

2. ZZ transform

Let \(f(t) \) be a function defined for all \(t\ge 0. \) The ZZ transform of \(f(t) \) is the function \(Z(u,\ s) \) defined by
\begin{equation} \label{eq1} Z\left(u,\ s\right)=H\left\{f\left(t\right)\right\}=s\int^{\infty }_0{f\left(ut\right)e^{-st}dt},\end{equation}
(1)
or, equivalently,
\begin{equation} \label{eq1.1}Z\left(u,\ s\right)=H\left\{f\left(t\right)\right\}=\frac{s}{u}\int^{\infty }_0{f\left(t\right)e^{\frac{-s}{u}t}dt}.\end{equation}
(2)
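The entries of Table 1 below follow directly from definition (2). For example, for \(f(t)=t^{n}\), \[Z\left(u,\ s\right)=\frac{s}{u}\int^{\infty }_0 t^{n}e^{\frac{-s}{u}t}dt=\frac{s}{u}\cdot \frac{n!}{\left(\frac{s}{u}\right)^{n+1}}=\frac{n!\,u^{n}}{s^{n}},\] and the remaining rows are obtained in the same way.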
Table 1. ZZ transform of some functions.
\(f(t) \) \(H\left\{f\left(t\right)\right\}=Z(u,\ s) \)
\(1 \) \(1 \)
\(t \) \(\frac{u}{s} \)
\(t^2 \) \(\frac{2!u^2}{s^2} \)
\(t^n \) \(\frac{n!u^n}{s^n} \)
\(e^{at} \) \(\frac{s}{s-au} \)
\(\cos(at) \) \(\frac{s^2}{s^2+a^2u^2} \)
\(\sin(at) \) \(\frac{aus}{s^2+\left(au\right)^2} \)
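A quick numerical check of the entry \(Z\{t\}=\frac{u}{s}\), written as a short Octave sketch (the values of \(u\) and \(s\) are arbitrary illustrative choices):

% Numerical check of Z{t} = u/s from definition (1), for illustrative u and s
u = 2; s = 3;
f = @(t) t;
Z = s * quadgk(@(t) f(u*t) .* exp(-s*t), 0, Inf);
disp([Z, u/s])   % both approximately 0.6667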

2.1. The ZZ decomposition method

Consider the general nonlinear ordinary differential equation of the form:
\begin{equation} \label{eq1.2}Lv+Rv+Nv=g\left(t\right),\end{equation}
(3)
with initial condition
\begin{equation} \label{eq1.3}v\left(0\right)=f\left(t\right),\end{equation}
(4)
where \(v\) is the unknown function, \(L\) is the linear differential operator of highest order, \(R\) is the remainder of the linear differential operator, \(g(t)\) is the nonhomogeneous term and \(N(v)\) is the nonlinear term. In the examples below, \(L\) is the first order derivative operator and the nonlinear terms involved are \(y^{2}\), \(x^{2}\) and \({\left(\frac{dv}{dt}\right)}^{2}\), respectively.

Suppose \(L\) is a first order differential operator. Then, taking the ZZ transform of Equation (3), we have

\begin{equation} \label{eq1.4}\frac{s}{u}V\left(u,\ s\right)-\frac{s}{u}V\left(0\right)+H\left[Rv\right]+H\left[Nv\right]=H\left[g\left(t\right)\right].\end{equation}
(5)
Substituting the given initial condition from Equation (4), we get \[\frac{s}{u}V\left(u,\ s\right)-\frac{s}{u}f\left(t\right)+H\left[Rv\right]+H\left[Nv\right]=H\left[g\left(t\right)\right],\] or, equivalently,
\begin{equation} \label{eq1.5}V\left(u,\ s\right)=f\left(t\right)+\frac{u}{s}H\left[g\left(t\right)\right]-\frac{u}{s}H\left[Rv+Nv\right].\end{equation}
(6)
Since \(V(u,\ s)\) is the ZZ transform of the solution \(v(t)\), taking the inverse ZZ transform of Equation (6) yields the solution in the form
\begin{equation} \label{eq1.6}v\left(t\right)=G\left(t\right)-H^{-1}\left[\frac{u}{s}H\left[Rv+Nv\right]\right].\end{equation}
(7)
where \(G\left(t\right)=f\left(t\right)+H^{-1}\left[\frac{u}{s}H\left[g\left(t\right)\right]\right]\) collects the contributions of the initial condition and the nonhomogeneous term. We now assume an infinite series solution of the unknown function \(v(t)\) of the form
\begin{equation} \label{eq1.7}v\left(t\right)=\sum^{\infty }_{n=0}{v_n\left(t\right)}.\end{equation}
(8)
The nonlinear operator \(Nv={\Psi }(v)\) is decomposed as \[Nv=\sum^{\infty }_{n=0}{A_n\left(t\right)},\] where the \(A_n\) are the Adomian polynomials. These can be calculated for various classes of nonlinearity according to \[A_n=\frac{1}{n!}\frac{d^n}{{d\lambda }^n}\left[{\Psi }\left(\sum^n_{i=0}{{\lambda }^i v_i}\right)\right]_{\lambda =0}.\] By using Equation (8), Equation (7) can be rewritten as
\begin{equation} \label{eq1.8}\sum^{\infty }_{n=0}{v_n\left(t\right)}=G\left(t\right)-H^{-1}\left[\frac{u}{s}H\left[R\sum^{\infty }_{n=0}{v_n\left(t\right)}+\sum^{\infty }_{n=0}{A_n\left(t\right)}\right]\right].\end{equation}
(9)
Now, if we compare both sides of Equation (9), we can get the following recurrence relation \begin{align*}v_0&=G(t),\\ v_1&={-H}^{-1}\left[\frac{u}{s}H\left[Rv_0(t)+A_0(t)\right]\right],\\ v_2&={-H}^{-1}\left[\frac{u}{s}H\left[Rv_1(t)+A_1(t)\right]\right],\\ v_3&={-H}^{-1}\left[\frac{u}{s}H\left[Rv_2(t)+A_2(t)\right]\right].\end{align*} Finally, we have the following general recurrence relation;
\begin{equation} \label{eq1.9}v_{n+1}={-H}^{-1}\left[\frac{u}{s}H\left[Rv_n(t)+A_n(t)\right]\right],\ n\ge 0.\end{equation}
(10)
Therefore, the exact or approximate solution is given by
\begin{equation} \label{eq2}v\left(t\right)=\sum^{\infty }_{n=0}{v_n\left(t\right)}.\end{equation}
(11)
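For the quadratic nonlinearity \(Nv=\Psi\left(v\right)=v^{2}\), which appears in Examples 1 and 2 below, the formula for \(A_n\) gives \[A_0=v^2_0,\ \ A_1=2v_0v_1,\ \ A_2=2v_0v_2+v^2_1,\ \ A_3=2v_0v_3+2v_1v_2,\] and these are exactly the combinations used in the computations that follow.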

Example 1. Consider the nonlinear initial value problem

\begin{equation} \label{eq2.1}y'={[y(t)]}^{2},\ \ \ y\left(0\right)=1, \end{equation}
(12)
with exact solution \(y\left(t\right)=\frac{1}{1-t}\). Applying the ZZ transform to Equation (12), we have
\begin{equation} \label{eq2.2}\frac{s}{u}Y\left(u,\ s\right)-\frac{s}{u}y\left(0\right)=H\left[y^2\right]. \end{equation}
(13)
Substituting the given initial condition into Equation (13), we get
\begin{equation} \label{eq2.3}\frac{s}{u}Y\left(u,\ s\right)-\frac{s}{u}=H\left[y^2\right] .\end{equation}
(14)
After simple calculation from Equation (14), we have
\begin{equation} \label{eq2.4}Y\left(u,\ s\right)=1+\frac{u}{s}H\left[y^2\right].\end{equation}
(15)
By taking the inverse ZZ transform of Equation (15), we have
\begin{equation} \label{eq2.5}y\left(t\right)=1+H^{-1}\left[\frac{u}{s}H\left[y^2\right]\ \right]. \end{equation}
(16)
We now assume an infinite series solution of the unknown function \(y\left(t\right) \) of the form
\begin{equation} \label{eq2.6}y\left(t\right)=\sum^{\infty }_{n=0}{y_n}(t).\end{equation}
(17)
By using Equation (17), we can write Equation (16) in the form
\begin{equation} \label{eq2.7}\sum^{\infty }_{n=0}{y_n\left(t\right)=1+H^{-1}\left[\frac{u}{s}\left[H\sum^{\infty }_{n=0}{A_n(t)}\right]\right]\ },\end{equation}
(18)
where the \(A_n\) are the Adomian polynomials of the nonlinear term \(y^2(t)\). Now, by comparing both sides of Equation (18), we get the following recurrence relation: \begin{align*}y_0\left(t\right)&=1,\\ y_1(t)&=H^{-1}\left[\frac{u}{s}H\left[A_0(t)\right]\right],\\ y_2(t)&=H^{-1}\left[\frac{u}{s}H\left[A_1(t)\right]\right],\\ y_3(t)&=H^{-1}\left[\frac{u}{s}H\left[A_2(t)\right]\right].\end{align*} Finally, we have the following general recurrence relation:
\begin{equation} \label{eq2.8}y_{n+1}\left(t\right)=H^{-1}\left[\frac{u}{s}H\left[A_n\left(t\right)\right]\right],\ n\ge 0.\end{equation}
(19)
Now, by using the recursive relation in Equation (19), we can easily compute the remaining components of the unknown function \(y\left(t\right)\) in the following manner: \begin{align*}y_1(t)&=H^{-1}\left[\frac{u}{s}H\left[A_0(t)\right]\right]=H^{-1}\left[\frac{u}{s}H\left[{y_0}^2(t)\right]\right] =H^{-1}\left[\frac{u}{s}H\left[1\right]\right]=H^{-1}\left[\frac{u}{s}\times 1\right]=H^{-1}\left[\frac{u}{s}\right]=t,\\ y_2\left(t\right)&=H^{-1}\left[\frac{u}{s}H\left[A_1\left(t\right)\right]\right]=H^{-1}\left[\frac{u}{s}H\left[2y_0\left(t\right)y_1\left(t\right)\right]\right] =H^{-1}\left[\frac{u}{s}H\left(2t\right)\right]={2H}^{-1}\left[\frac{u^2}{s^2}\right]=t^2.\end{align*} Similarly, we can find \(y_3\left(t\right)\): \[y_3\left(t\right)=H^{-1}\left[\frac{u}{s}H\left[A_2\left(t\right)\right]\right]=H^{-1}\left[\frac{u}{s}H\left[2y_0\left(t\right)y_2\left(t\right)+{\left(y_1\left(t\right)\right)}^2\right]\right].\] After some calculation, we obtain \(y_3(t)=t^3\) and so on. Hence, the approximate solution is given by \[y\left(t\right)=\sum^{\infty }_{n=0}{y_n}\left(t\right)=y_0\left(t\right)+y_1\left(t\right)+y_2\left(t\right)+y_3\left(t\right)+\dots =1+t+t^2+t^3+\dots =\frac{1}{1-t}.\] The corresponding Octave code is:

t = 0:0.05:0.9;
f = 1 ./ (1 - t);                                      % exact solution
g = 1 + t + t.^2 + t.^3 + t.^4 + t.^5 + t.^6 + t.^7;   % truncated series
plot(t, f, 'r', t, g, 'o');
ylabel('y(t)'); xlabel('t'); legend('Exact', 'Approximate');

Hence, the exact solution is in close agreement with the result obtained by the ZZ decomposition method.

Example 2. Consider the nonlinear initial value problem

\begin{equation} \label{eq2.9}x'={1-[x(t)]}^{2},\ \ \ x\left(0\right)=0. \end{equation}
(20)
Using the method of separation of variables, the exact solution is \(x\left(t\right)=\frac{e^{2t}-1}{e^{2t}+1}\). Applying the ZZ transform to Equation (20), we have
\begin{equation} \label{eq3}\frac{s}{u}X\left(u,\ s\right)-\frac{s}{u}x\left(0\right)=1-H\left[x^2\right].\end{equation}
(21)
Substituting the given initial condition into Equation (21), we have
\begin{equation} \label{eq3.1}\frac{s}{u}X\left(u,\ s\right)=1-H\left[x^2\right].\end{equation}
(22)
After simple calculation from Equation (22), we have
\begin{equation} \label{eq3.2}X\left(u,\ s\right)=\frac{u}{s}-\frac{u}{s}H\left[x^2\right].\end{equation}
(23)
By taking the inverse ZZ transform of Equation (23), we have
\begin{equation} \label{eq3.4}x\left(t\right)=t-H^{-1}\left(\frac{u}{s}H\left[x^2\right]\right).\end{equation}
(24)
We now assume an infinite series solution of the unknown function \(x\left(t\right) \) of the form
\begin{equation} \label{eq3.5}x\left(t\right)=\sum^{\infty }_{n=0}{x_n}(t).\end{equation}
(25)
By using Equation (25), we can write Equation (24) in the form
\begin{equation} \label{eq3.6}\sum^{\infty }_{n=0}{x_n\left(t\right)=t-H^{-1}\left[\frac{u}{s}\left[H\sum^{\infty }_{n=0}{A_n(t)}\right]\right]\ },\end{equation}
(26)
where the \(A_n\) are the Adomian polynomials of the nonlinear term \(x^2(t)\). Now, by comparing both sides of Equation (26), we get the following recurrence relation: \begin{align*}x_0\left(t\right)&=t,\\ x_1\left(t\right)&=-H^{-1}\left[\frac{u}{s}H\left[A_0(t)\right]\right],\\ x_2\left(t\right)&=-H^{-1}\left[\frac{u}{s}H\left[A_1(t)\right]\right],\\ x_3(t)&={-H}^{-1}\left[\frac{u}{s}H\left[A_2(t)\right]\right].\end{align*} Finally, we have the following general recurrence relation
\begin{equation} \label{eq3.7}x_{n+1}\left(t\right)={-H}^{-1}\left[\frac{u}{s}H\left[A_n\left(t\right)\right]\right],\ n\ge 0 .\end{equation}
(27)
Then, by using the recursive relation in Equation (27), we can easily compute the remaining components of the unknown function \(x\left(t\right)\) in the following manner: \begin{align*}x_1(t)&={-H}^{-1}\left[\frac{u}{s}H\left[A_0(t)\right]\right] ={-H}^{-1}\left[\frac{u}{s}H\left[{\left(x_0(t)\right)}^2\right]\right] ={-H}^{-1}\left[\frac{u}{s}H\left[t^2\right]\right] ={-H}^{-1}\left[\frac{u}{s}\times 2!\frac{u^2}{s^2}\right]\\ & ={-2!H}^{-1}\left[\frac{u^3}{s^3}\right]=-\frac{t^3}{3},\\ x_2\left(t\right)&={-H}^{-1}\left[\frac{u}{s}H\left[A_1\left(t\right)\right]\right]={-H}^{-1}\left[\frac{u}{s}H\left[2x_0\left(t\right)x_1\left(t\right)\right]\right] ={-H}^{-1}\left[\frac{u}{s}H\left(2t\times \left(-\frac{t^3}{3}\right)\right)\right]\\ &={-H}^{-1}\left[\frac{u}{s}H\left(\frac{-2}{3}t^4\right)\right]={-H}^{-1}\left[\frac{-2}{3}\frac{u}{s}H\left(t^4\right)\right]=\frac{2}{3}H^{-1}\left[4!\frac{u^5}{s^5}\right] =\frac{2}{3}\times 4!\ H^{-1}\left[\frac{u^5}{s^5}\right]\\ &=\frac{2}{3}\times 4!\times\frac{t^5}{5!}=\frac{2}{15}t^5.\end{align*} Similarly, we can find \(x_3\left(t\right)\): \[x_3\left(t\right)=-H^{-1}\left[\frac{u}{s}H\left[A_2\left(t\right)\right]\right]={-H}^{-1}\left[\frac{u}{s}H\left[2x_0\left(t\right)x_2\left(t\right)+{\left(x_1\left(t\right)\right)}^2\right]\right].\] After some calculation, we obtain \(x_3\left(t\right)=\frac{-17}{315}t^7\) and so on. Hence, the approximate solution is given by \[x\left(t\right)=\sum^{\infty }_{n=0}{x_n}\left(t\right)=x_0\left(t\right)+x_1\left(t\right)+x_2\left(t\right)+x_3\left(t\right)+\dots=t-\frac{t^3}{3}+\frac{2}{15}t^5-\frac{17}{315}t^7+\dots\,. \] Comparing the approximate and the exact solution in Octave for increasing orders of the expansion shows close agreement: in the plot, the red line indicates the exact solution, while the circles (o) indicate the approximate solution.
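A minimal Octave sketch of this comparison (the plotting range is an illustrative choice):

% Compare the truncated series with the exact solution x(t) = (e^(2t)-1)/(e^(2t)+1)
t = 0:0.05:1.5;
exact  = (exp(2*t) - 1) ./ (exp(2*t) + 1);       % exact solution
approx = t - t.^3/3 + 2*t.^5/15 - 17*t.^7/315;   % four-term decomposition series
plot(t, exact, 'r', t, approx, 'o');
ylabel('x(t)'); xlabel('t'); legend('Exact', 'Approximate');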

Example 3. Solve

\begin{equation} \label{eq3.8}\frac{dv}{dt}{+\left(\frac{dv}{dt}\right)}^2=4v\left(t\right),\ \ \ \ \ \ \ \ \ v\left(0\right)=1. \end{equation}
(28)
Applying ZZ Transform of Equation (28), we have
\begin{equation} \label{eq3.9}\frac{s}{u}V\left(u,\ s\right)-\frac{s}{u}v\left(0\right)+H\left[{\left(\frac{dv}{dt}\right)}^2\right]=4V\left(u,s\right).\end{equation}
(29)
Substitute the given initial condition from Equation (29), we have
\begin{equation} \label{eq4}\frac{s}{u}V\left(u,\ s\right)-\frac{s}{u}=4V\left(u,s\right)-H\left[{\left(\frac{dv}{dt}\right)}^2\right].\end{equation}
(30)
After simple calculation from Equation (30), we have
\begin{equation} \label{eq4.1}V\left(u,\ s\right)=\frac{s}{s-4u}-\frac{u}{s-4u}H\left[{\left(\frac{dv}{dt}\right)}^2\right].\end{equation}
(31)
By taking the inverse ZZ transform of Equation (31), we have
\begin{equation} \label{eq4.2}v\left(t\right)=e^{4t}-H^{-1}\left[\frac{u}{s-4u}H\left[{\left(\frac{dv}{dt}\right)}^2\right]\right].\end{equation}
(32)
We now assume an infinite series solution of the unknown function \(v\left(t\right) \) of the form
\begin{equation} \label{eq4.3}v\left(t\right)=\sum^{\infty }_{n=0}{v_n}(t).\end{equation}
(33)
By using Equation (33), we can write Equation (32) in the form
\begin{equation} \label{eq4.4}\sum^{\infty }_{n=0}{v_n\left(t\right)=e^{4t}-H^{-1}\left[\frac{u}{s-4u}\left[H\sum^{\infty }_{n=0}{A_n(t)}\right]\right]\ },\end{equation}
(34)
where \(A_n\) denotes the Adomian polynomials of the nonlinear term \({\left(\frac{dv}{dt}\right)}^2 \). Now, by comparing both sides of Equation (34), we can get the following recurrence relation: \begin{align*} v_0\left(t\right)&=e^{4t},\\ v_1\left(t\right)&=-H^{-1}\left[\frac{u}{s-4u}H\left[A_0(t)\right]\right],\\ v_2\left(t\right)&=-H^{-1}\left[\frac{u}{s-4u}H\left[A_1(t)\right]\right],\\ v_3(t)&={-H}^{-1}\left[\frac{u}{s-4u}H\left[A_2(t)\right]\right].\end{align*} Finally, we have the following general recurrence relation
\begin{equation} \label{eq4.5}v_{n+1}\left(t\right)={-H}^{-1}\left[\frac{u}{s-4u}H\left[A_n\left(t\right)\right]\right],\ n\ge 0.\end{equation}
(35)
Then, by using the recursive relation in Equation (35), we can easily compute the remaining components of the unknown function \(v\left(t\right)\) in the following manner: \begin{align*}v_1(t)&=-H^{-1}\left[\frac{u}{s-4u}H\left[A_0(t)\right]\right] =-H^{-1}\left[\frac{u}{s-4u}H\left[\left(v'_0(t)\right)^{2}\right]\right] =-H^{-1}\left[\frac{u}{s-4u}H\left[\left({\left(e^{4t}\right)}'\right)^{2}\right]\right]\\ & =-4H^{-1}\left[\frac{u}{s-4u}\times \frac{4s}{s-8u}\right] =-4H^{-1}\left[\frac{s}{s-4u}-\frac{s}{s-8u}\right] =-4e^{4t}+4e^{8t},\\ v_2\left(t\right)&=-H^{-1}\left[\frac{u}{s-4u}H\left[A_1\left(t\right)\right]\right] =-H^{-1}\left[\frac{u}{s-4u}H\left[2v_0\left(t\right)v_1\left(t\right)\right]\right] =-H^{-1}\left[\frac{u}{s-4u}H\left(-8e^{8t}+8e^{12t}\right)\right]\\ &=-H^{-1}\left[\frac{-8us}{(s-4u)(s-8u)}+\frac{8us}{(s-4u)(s-12u)}\right] =-{H}^{-1}\left[-2\left(\frac{s}{s-8u}-\frac{s}{s-4u}\right)+\frac{s}{s-12u}-\frac{s}{s-4u}\right]\\ &=2e^{8t}-e^{4t}-e^{12t}.\end{align*} Similarly, we can find \(v_3\left(t\right)\) as: \[v_3\left(t\right)=-H^{-1}\left[\frac{u}{s-4u}H\left[A_2\left(t\right)\right]\right]=-H^{-1}\left[\frac{u}{s-4u}H\left[2v_0\left(t\right)v_2\left(t\right)+{\left(v_1\left(t\right)\right)}^2\right]\right].\] After some calculation steps, we obtain \begin{align*}v_3\left(t\right)&=-H^{-1}\left[\frac{u}{s-4u}H\left[14e^{8t}-28e^{12t}+14e^{16t}\right]\right]\\ & =-H^{-1}\left[\frac{u}{s-4u}\left(\frac{14s}{s-8u}-\frac{28s}{s-12u}+\frac{14s}{s-16u}\right)\right]\\ &=-\frac{7}{2}H^{-1}\left[\frac{s}{s-8u}-\frac{s}{s-4u}\right]-\frac{7}{2}H^{-1}\left[\frac{s}{s-4u}-\frac{s}{s-12u}\right]-\frac{7}{6}H^{-1}\left[\frac{s}{s-16u}-\frac{s}{s-4u}\right]\\ &=-\frac{7}{2}e^{8t}+\frac{7}{2}e^{12t}-\frac{7}{6}e^{16t}+\frac{7}{6}e^{4t}.\end{align*} Hence, the approximate solution is given by \[v\left(t\right)=\sum^{\infty }_{n=0}{v_n}\left(t\right)=v_0\left(t\right)+v_1\left(t\right)+v_2\left(t\right)+v_3\left(t\right)+\dots =-\frac{17}{6}e^{4t}+\frac{5}{2}e^{8t}+\frac{5}{2}e^{12t}-\frac{7}{6}e^{16t}+\dots \,.\]
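As a quick sanity check on the four-term approximation above, one may verify that the truncated series satisfies the initial condition \(v(0)=1\). The following SymPy snippet (an illustration added here, not part of the paper) evaluates the partial sum obtained above at \(t=0\):

import sympy as sp

t = sp.Symbol('t')
# four-term ZZ-decomposition partial sum computed above
v4 = (sp.Rational(-17, 6)*sp.exp(4*t) + sp.Rational(5, 2)*sp.exp(8*t)
      + sp.Rational(5, 2)*sp.exp(12*t) - sp.Rational(7, 6)*sp.exp(16*t))

print(v4.subs(t, 0))   # 1, consistent with the initial condition v(0) = 1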

3. Conclusion

In this paper, the ZZ decomposition method has been successfully applied to find approximate solutions of first order initial value problems for nonlinear ordinary differential equations. When the approximate solutions of the given problems are compared with their analytical solutions, the ZZ decomposition method proves very effective and the two are in close agreement. It may be concluded that the ZZ decomposition method is very powerful and efficient in finding analytical as well as numerical solutions for wide classes of nonlinear ordinary differential equations.

Author Contributions

All authors contributed equally to the writing of this paper. All authors read and approved the final manuscript.

Conflicts of Interest

The authors declare no conflict of interest.

Weak implicative UP-filters of UP-algebras https://old.pisrt.org/psr-press/journals/oms-vol-4-2020/weak-implicative-up-filters-of-up-algebras/ Wed, 02 Dec 2020 13:50:30 +0000 https://old.pisrt.org/?p=4752
OMS-Vol. 4 (2020), Issue 1, pp. 442 - 447 Open Access Full-Text PDF
Daniel A. Romano, Young Bae Jun
Abstract: The concept of weak implicative UP-filters in UP-algebras is introduced and analyzed. Some characterizations of weak implicative UP-filters are derived with the use of some other filter types in such algebras.

Open Journal of Mathematical Sciences

Weak implicative UP-filters of UP-algebras

Daniel A. Romano\(^1\), Young Bae Jun
International Mathematical Virtual Institute 6, Kordunaška Street, 78000 Banja Luka, Bosnia and Herzegovina.; (D.A.R)
Department of Mathematics Education, Gyeongsang National University, Jinju 52828, Korea.; (Y.B.J)
\(^{1}\)Corresponding Author: bato49@hotmail.com

Abstract

The concept of weak implicative UP-filters in UP-algebras is introduced and analyzed. Some characterizations of weak implicative UP-filters are derived with the use of some other filter types in such algebras.

Keywords:

UP-algebra, UP-filter, implicative UP-filter, comparative UP-filter, \(x\)-allied UP-filter, weak implicative UP-filter.

1. Introduction

Prabpayak and Leerawat [1,2] introduced the notion of a KU-algebra. In [3], Iampan introduced a new algebraic structure, called a UP-algebra, which is a generalization of a KU-algebra. He studied a UP-subalgebra and a UP-ideal. Somjanta et al., [4] introduced the concept of UP-filters. Then Jun and Iampan ([5,6,7]) developed several types of filters in these algebras such as comparative and implicative UP-filters. Romano also took part in analyzing filter properties in UP-algebras (for example [8,9]).

In this paper, we introduce the notion of a weak implicative UP-filter, which is located between the UP-filter and the implicative UP-filter, and further study the relationships between various types of UP-filters, that is, the relations between UP-filters, implicative UP-filters, weak implicative UP-filters, comparative UP-filters and allied UP-filters. We provide conditions for a weak implicative UP-filter to be an implicative UP-filter. We consider conditions for a UP-filter to be a weak implicative UP-filter. We suggest conditions for a weak implicative UP-filter to be an allied UP-filter.

2. Preliminaries

An algebra \(A = (A,\cdot,0)\) of type \((2,0)\) is called a UP-algebra (see [3]) if it satisfies the following axioms:
  • (UP-1) \((\forall x, y, z \in A)((y \cdot z) \cdot ((x \cdot y) \cdot (x \cdot z)) = 0)\),
  • (UP-2) \((\forall x \in A)(0 \cdot x = x)\),
  • (UP-3) \((\forall x \in A)(x \cdot 0 = 0)\),
  • (UP-4) \((\forall x, y \in A)((x \cdot y = 0 \, \wedge\, y \cdot x = 0)\, \Longrightarrow \, x = y).\)
In this algebra, the order relation `\(\leqslant\,\)' is defined as follows \[(\forall x,y \in A)(x \leqslant y \, \Longleftrightarrow \, x \cdot y = 0).\] A subset \(F\) of a UP-algebra \(A\) is called a UP-filter of \(A\) (see [4]) if it satisfies the following conditions:
  • (F-1) \(0 \in F\),
  • (F-2) \((\forall x,y \in A)((x \in F \, \wedge \, x \cdot y \in F)\, \Longrightarrow \, y \in F)\).
It is clear that every UP-filter \(F\) of a UP-algebra \(A\) satisfies:
  • (1) \( (\forall x,y \in A) ((x \in F \, \wedge \, x \leqslant y )\, \Longrightarrow \, y \in F)\).
The family \(\mathfrak{ F }(A)\) of all UP-filters of a UP-algebra \( A \) is not empty and forms a complete lattice.

Definition 1. ([5], Definition 1) A subset \(F\) of a UP-algebra \(A\) is called an implicative UP-filter of \(A\) if it satisfies the following conditions:

  • (F-1) \(0 \in F\) and
  • (IF) \((\forall x, y, z \in A)((x \cdot (y \cdot z) \in F \, \wedge \, x \cdot y \in F)\, \Longrightarrow \, x \cdot z \in F).\)

Example 1. Let \(A = \{0, 1, 2, 3\}\) be a set with the operation `\(\cdot\)' defined on \(A\) as follows:

\[\begin{array}{c|cccc} \cdot & 0 & 1 & 2 & 3 \\ \hline 0 & 0 & 0 & 0 & 0 \\ 1 & 1 & 0 & 3 & 0 \\ 2 & 1 & 2 & 0 & 0 \\ 3 & 0 & 1 & 2 & 3 \end{array}\]
Then \(A = (A,\cdot,0)\) is a UP-algebra where the order relation `\(\leqslant\)' is defined as follows \(\leqslant = \{(0,0), (1,1), (2,2), (3,3), (1,0), (2,0), (3,0), (1,2), (1,3)\}\). The subsets \(\{0\}\), \(\{0,2\}\), \(\{0,3\}\) and \(\{0,2,3\}\) are implicative UP-filters of \(A\).

Note that if we put \( x = 0 \), \( y = x \) and \( z = y \) in (IF) and use (UP-2), then every implicative UP-filter is a UP-filter (see also [5], Theorem 1). But a UP-filter may not be an implicative UP-filter as seen in the following example.

Example 2. ([5], Example 2) Consider a UP-algebra \(A = \{0, 1, 2, 3\}\) with the binary operation `\(\cdot\,\)' given in the following table

\[\begin{array}{c|cccc} \cdot & 0 & 1 & 2 & 3 \\ \hline 0 & 0 & 1 & 2 & 3 \\ 1 & 0 & 0 & 1 & 2 \\ 2 & 0 & 0 & 0 & 2 \\ 3 & 0 & 0 & 0 & 0 \end{array}\]
Here the order relation `\(\leqslant\,\)' is given by \[\leqslant = \{(0,0), (1,1), (2,2), (3,3), (1,0), (2,0), (2,1), (3,0), (3,1), (3,2) \}.\] Then the subset \(\{0\}\) is a UP-filter of \(A\), but it is not an implicative UP-filter since \(2 \cdot (2 \cdot 3) = 0 \in \{0\}\) and \(2 \cdot 2 = 0 \in \{0\}\), but \(2 \cdot 3 = 2 \notin \{0\}\).

3. Weak implicative UP-filters of UP-algebras

This section introduces weak implicative UP-filter located between UP-filter and implicative UP-filter, and further studies the relationship between various types of UP-filters.

In what follows, let \(A\) denote a UP-algebra unless otherwise specified.

Definition 2. A subset \(F\) of \(A\) is called a weak implicative UP-filter of \(A\) if it satisfies the following conditions:

  • (F-1) \(0 \in F\),
  • (WIF) \((\forall x, y, z \in A)((x \cdot (y \cdot z) \in F \, \wedge \, x \cdot y \in F)\, \Longrightarrow \, x \cdot (x \cdot z) \in F)\).

Example 3. Let \( A = \{0,1,2,3,4 \}\) be a set with the operation `\( \cdot\)' given by

\[\begin{array}{c|ccccc} \cdot & 0 & 1 & 2 & 3 & 4 \\ \hline 0 & 0 & 1 & 2 & 3 & 4 \\ 1 & 0 & 0 & 0 & 0 & 0 \\ 2 & 0 & 2 & 0 & 0 & 0 \\ 3 & 0 & 2 & 2 & 0 & 0 \\ 4 & 0 & 2 & 2 & 4 & 0 \end{array}\]
Then \(A\) is a UP-algebra (see Example 1.12 in [3]), and \(F:=\{0,2\}\) is a weak implicative UP-filter of \(A\).

Proposition 1. Every weak implicative UP-filter \( F \) of \(A\) satisfies the following assertion. \((\forall x, z \in A)(x \cdot z \in F\, \Longrightarrow \, x \cdot (x \cdot z) \in F)\).

Proof. If we put \( y = 0 \) in (WIF), we immediately get the required implication.

Theorem 1. Every weak implicative UP-filter is a UP-filter.

Proof. If we put \( x = 0 \), \( y = x \) and \( z = y \) in (WIF), we get (F-2) by (UP-2).

The following example shows that the converse of Theorem 1 is not true.

Example 4. Let \(A = \{0, 1, 2, 3\}\) be a set with the binary operation `\(\cdot\)' which is given by the following table.

\[\begin{array}{c|cccc} \cdot & 0 & 1 & 2 & 3 \\ \hline 0 & 0 & 1 & 2 & 3 \\ 1 & 0 & 0 & 1 & 2 \\ 2 & 0 & 0 & 0 & 1 \\ 3 & 0 & 0 & 0 & 0 \end{array}\]
Then \(A\) is a UP-algebra (see Example 3 in [6]). The subset \(F := \{0,1,2\}\) is a UP-filter of \(A\), but it is not a weak implicative UP-filter of \(A\) because \(0 \cdot (2 \cdot 3) = 1 \in F\) and \(0 \cdot 2 = 2 \in F\) but \(0 \cdot (0 \cdot 3) = 3\notin F\).

Lemma 1. ([3]) Every UP-algebra \(A\) satisfies the following condition \[(\forall x,y\in A)(x\le y\cdot x).\]

Theorem 2. Every implicative UP-filter is a weak implicative UP-filter.

Proof. Let \(F\) be an implicative UP-filter of \(A\). Then \(F\) is a UP-filter of \(A\) by Theorem 1. Let \(x, y, z \in A\) be such that \(x \cdot (y \cdot z) \in F\) and \(x \cdot y \in F\). Then \(x\cdot z \in F\) by (IF). Since \(x \cdot z \leqslant x \cdot (x \cdot z)\) by Lemma 1, it follows from (1) that \(x \cdot (x \cdot z) \in F\). Therefore \(F\) is a weak implicative UP-filter of \(A\).

The converse of Theorem 2 is not true as seen in the following example.

Example 5. Consider the UP-algebra \(A\) which is given in Example 2. The subset \(\{0\}\) is a weak implicative UP-filter of \(A\) but it is not an implicative UP-filter of \(A\).
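These claims can be checked mechanically on the finite table. The following Python sketch (an illustration added here, not part of the paper; the function names are ours) encodes the Cayley table of Example 2 and tests conditions (F-2), (WIF) and (IF) by brute force, confirming that \(F=\{0\}\) is a UP-filter and a weak implicative UP-filter but not an implicative UP-filter.

from itertools import product

# Cayley table of the UP-algebra in Example 2: dot[x][y] = x . y
dot = [[0, 1, 2, 3],
       [0, 0, 1, 2],
       [0, 0, 0, 2],
       [0, 0, 0, 0]]
A = range(4)

def is_up_filter(F):
    return 0 in F and all(y in F for x, y in product(A, A)
                          if x in F and dot[x][y] in F)

def is_weak_implicative(F):
    return 0 in F and all(dot[x][dot[x][z]] in F for x, y, z in product(A, A, A)
                          if dot[x][dot[y][z]] in F and dot[x][y] in F)

def is_implicative(F):
    return 0 in F and all(dot[x][z] in F for x, y, z in product(A, A, A)
                          if dot[x][dot[y][z]] in F and dot[x][y] in F)

F = {0}
print(is_up_filter(F), is_weak_implicative(F), is_implicative(F))   # True True False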

We provide conditions for a weak implicative UP-filter to be an implicative UP-filter, and hence the following theorem is a characterization of an implicative UP-filter.

Theorem 3. Let \(F\) be a weak implicative UP-filter of \(A\). Then \(F\) is an implicative UP-filter of \(A\) if and only if

  • (2) \((\forall x, y \in A)(x \cdot (x \cdot y) \in F \, \Longleftrightarrow \, x \cdot y \in F)\).

Proof. Let \(F\) be a weak implicative UP-filter of \(A\). Then \(F\) is a UP-filter of \(A\) by Theorem 1. Assume that \( F \) is an implicative UP-filter of \( A \), and let \( x, y \in A \) be such that \( x \cdot(x \cdot y) \in F \). Since \(x\cdot x=0\in F\), it follows from (IF) that \(x\cdot y\in F\). Now suppose \(x\cdot y\in F\). Since \( x \cdot y \leqslant x \cdot (x \cdot y) \) by Lemma 1, it follows that \(x \cdot (x \cdot y) \in F\) by (1). Hence (2) is valid.

Conversely, assume that \(F\) satisfies the condition (2). Let \( x, y, z \in A \) be such that \( x \cdot (y \cdot z) \in F \) and \( x\cdot y \in F\). Since \(F\) is a weak implicative UP-filter, we get \(x \cdot (x \cdot z) \in F\) which implies from (2) that \(x \cdot z \in F\). So, \(F\) is an implicative UP-filter of \(A\).

In the next theorem, we consider conditions for a UP-filter to be a weak implicative UP-filter.

Theorem 4. If a UP-filter \(F\) of \(A\) satisfies the following condition

  • (3) \((\forall x,y, z \in A)(x \cdot (y \cdot z) \in F \, \Longrightarrow \, (x \cdot y)\cdot (x \cdot (x \cdot z)) \in F)\), then \(F\) is a weak implicative UP-filter of \(A\).

Proof. Let \(F\) be a UP-filter of \(A\) satisfying the condition (3). Let \(x,y,z \in A\) be such that \(x \cdot (y \cdot z) \in F\) and \(x \cdot y \in F\). Then \((x \cdot y)\cdot (x \cdot (x \cdot z))\in F\) by (3), and thus \(x \cdot (x \cdot z) \in F\) by (F-2). Therefore, \(F\) is a weak implicative UP-filter of \(A\).

In [5], Jun and Iampan introduced the notion of allied UP-filters in UP-algebras. The concept of allied UP-filters with respect to the element \(x\) in \(A\) is given by the following definition.

Definition 3. ([6], Definition 4) Let \(x\) be a fixed element of \(A\). A subset \(F\) of \(A\) is called an allied UP-filter of \(A\) with respect to \(x\) (briefly, \(x\)-allied UP-filter of \(A\)) if it satisfies the conditions

  • (F-1) \(0 \in F\) and
  • (FA) \((\forall y,z \in A)((x \cdot (y \cdot z) \in F \, \wedge \, x \cdot y \in F) \, \Longrightarrow \, z \in F)\).
We discuss the relationship between allied UP-filters and (weak) implicative UP-filters. The following example shows that a weak implicative UP-filter need not be an \(x\)-allied UP-filter for some \(x\in A\).

Example 6. Consider the UP-algebra in Example 2. It can be proved without major difficulties that the set \(F :=\{0,1,2 \}\) is a weak implicative UP-filter of \( A \) but it is not a \(1\)-allied UP-filter because for \(x=1\), \(y = 0\) and \(z =3\) we have \(1 \cdot (0 \cdot 3) = 1 \cdot 3 = 2 \in F\) and \(1 \cdot 0 = 0 \in F\) but \(3 \notin F\).

Given an element \(x\) of \(A\), we suggest the conditions for a weak implicative UP-filter to be an \(x\)-allied UP-filter.

Theorem 5. Given an element \(x\) of \(A\), if a weak implicative UP-filter \(F\) of \(A\) satisfies the following condition

  • (4) \((\forall y \in A)(x \cdot (x \cdot y)\in F \, \Longrightarrow \, y \in F),\)
then \(F\) is an \(x\)-allied UP-filter of \(A\).

Proof. Let \(F\) be a weak implicative UP-filter of \(A\) satisfying the condition (4) for \(x\in A\). Let \(y,z \in A\) be such that \(x \cdot (y \cdot z) \in F\) and \(x \cdot y \in F\). Then \(x \cdot (x \cdot z) \in F\) by (WIF). It follows from (4) that \(z \in F\). Therefore, \(F\) is an \(x\)-allied UP-filter of \(A\).

The following example shows that an implicative UP-filter need not be an \(x\)-allied UP-filter for some \(x\in A\).

Example 7. Let \(A = \{0, a, b, c\}\) be a set with the binary operation `\(\cdot\)' which is given by the following table.

\[\begin{array}{c|cccc} \cdot & 0 & a & b & c \\ \hline 0 & 0 & a & b & c \\ a & 0 & 0 & b & 0 \\ b & 0 & 0 & 0 & c \\ c & 0 & 0 & b & 0 \end{array}\]
Then \(A\) is a UP-algebra (see Example 1.6 in [3]). The subset \(F := \{0,b\}\) is an implicative UP-filter of \(A\) (see Example 1 in [5]), but it is not a \(c\)-allied UP-filter of \(A\) because for \(x= c\), \(y = 0\) and \(z = c\) we have \(c \cdot (0 \cdot c) = 0 \in F\) and \(c \cdot 0 = 0 \in F\) but \(c \notin F\).

Corollary 1. Given an element \(x\) of \(A\), if an implicative UP-filter \(F\) of \(A\) satisfies the condition (4), then \(F\) is an \(x\)-allied UP-filter of \(A\).

Proposition 2. Given an element \(x\) of \(A\), every \( x \)-allied UP-filter \(F\) of \( A \) satisfies the condition (4).

Proof. Let \(y \in A\) be such that \(x\cdot (x \cdot y) \in F\). Since \( x \cdot x = 0 \in F \), it follows from (FA) that \(y\in F\). This completes the proof.

We consider conditions for a UP-filter to be a weak implicative UP-filter.

Theorem 6. Let \(A\) be a UP-algebra which satisfies the condition

  • (5) \((\forall x,y,z \in A)(x \cdot (y \cdot z) = y \cdot (x \cdot z))\).
If a UP-filter \(F\) of \(A\) satisfies the following assertion
  • (6) \((\forall x,y \in A)(x \cdot (x \cdot y) = x \cdot y)\),
then \(F\) is a weak implicative UP-filter of \(A\).

Proof. It is straightforward by combining Theorem 5 in [5] and Theorem 2.

Definition 4. ([6], Definition 2) A subset \(F\) of a UP-algebra \(A\) is called a comparative UP-filter of \(A\) if it satisfies the following conditions

  • (F-1) \(0 \in F\),
  • (CF) \((\forall x, y, z \in A)((x \cdot ((y \cdot z)\cdot y) \in F \, \wedge \, x \in F) \, \Longrightarrow \, y \in F)\).

Remark 1. Let us note that condition (CF) is equivalent to the condition

  • (CF(5)) \( (\forall x,y,z \in A) (((y \cdot z)\cdot (x \cdot y) \in F \, \wedge \, x \in F)\, \Longrightarrow \, y \in F) \)
if the UP-algebra \( A \) satisfies the condition (5). It should also be noted that if the UP-algebra satisfies condition (5), then condition (3) can be replaced by the condition
  • (3(5)) \((\forall x,y, z \in A)(x \cdot (y \cdot z) \in F \, \Longrightarrow \, x \cdot ((x \cdot y)\cdot (x \cdot z)) \in F)\).
It is known that any comparative UP-filter is a UP-filter and not vice versa (see Theorem 1 and Example 2 in [6]). We now consider relations between a weak implicative UP-filter and a comparative UP-filter. In the following example, we show that a weak implicative UP-filter may not be a comparative UP-filter.

Example 8. Let \( A = \{0,1,2,3\}\) be the UP-algebra which is given in Example 2. The subset \(\{0\}\) is a weak implicative UP-filter of \(A\) but it is not a comparative UP-filter of \(A\) since for \(x = 0\), \(y = 1\) and \(z = 2\) we have \(0 \cdot ((1 \cdot 2) \cdot 1) = 0 \cdot (1 \cdot 1) = 0 \cdot 0 = 0 \in \{0\}\) and \(0 \in \{0\}\) but \(1 \notin \{0\}\).

The following example shows that a comparative UP-filter need not be a weak implicative UP-filter.

Example 9. Let \( A = \{0,1,2,3\}\) be the UP-algebra which is given in Example 2. Then the set \(F := \{0,1,2\}\) is a comparative UP-filter of \(A\) (see Example 1 in [6]). But it is not a weak implicative UP-filter since \(0\cdot(1 \cdot 3) = 2 \in F\) and \(0\cdot 1 = 1 \in F\) but \(0 \cdot (0 \cdot 3) = 3\notin F\).

A UP-algebra \(A\) is said to be meet-commutative ([6], Definition 3) if it satisfies the condition

  • (7) \((\forall x, y \in A)((x \cdot y)\cdot y = (y \cdot x)\cdot x)\).
We provide conditions for a comparative UP-filter to be a weak implicative UP-filter.

Theorem 7. Let \(A\) be a meet-commutative UP-algebra which satisfies the condition (5). Then every comparative UP-filter is a weak implicative UP-filter.

Proof. It is straightforward by combining Theorem 3 in [6] and Theorem 2.

We end this section with the following theorem.

Theorem 8. The family \(\mathfrak{F}_{wi}(A)\) of all weak implicative UP-filters of \(A\) forms a complete lattice and \(\mathfrak{F}_{wi}(A) \, \subseteq \, \mathfrak{F}(A)\).

Proof. Let \(\{F_{k}\}_{k \in \Lambda}\) be a family of weak implicative UP-filters of \(A\), where \(\Lambda\) is an index set. It is clear that \( 0 \in \bigcap_{k \in \Lambda}F_{k} \). Let \(x,y,z \in A\) be such that \(x \cdot (y \cdot z) \in \bigcap_{k \in \Lambda}F_{k}\) and \(x \cdot y \in \bigcap_{k \in \Lambda}F_{k}\). Then \(x \cdot (y \cdot z) \in F_{k}\) and \(x \cdot y \in F_{k}\) for any \(k \in \Lambda\). Thus \(x \cdot (x \cdot z) \in F_{k}\) for all \(k \in \Lambda\). Hence \(x \cdot (x \cdot z) \in \bigcap_{k \in \Lambda}F_{k}\). So, the intersection \(\bigcap_{k \in \Lambda}F_{k}\) satisfies the condition (WIF). Therefore \(\bigcap_{k \in \Lambda}F_{k}\) is a weak implicative UP-filter of \( A \). Let \(\mathfrak{ X }\) be the family of all weak implicative UP-filters containing the union \(\bigcup_{k \in \Lambda}F_{k}\). Then \(\cap \mathfrak{X}\) is a weak implicative UP-filter of \( A \) according to the first part of this proof. If we put \(\sqcap_{k \in \Lambda}F_{k} = \bigcap_{k \in \Lambda}F_{k}\) and \(\sqcup_{k \in \Lambda}F_{k} = \cap \mathfrak{X}\), then \((\mathfrak{F}_{wi}(A), \sqcap, \sqcup)\) is a complete lattice.
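As a finite illustration of Theorem 8, one may enumerate the weak implicative UP-filters of the four-element UP-algebra of Example 2 and confirm that the resulting family is closed under intersection. The Python sketch below is an illustration added here, not part of the paper; is_wif is a brute-force test of (F-1) and (WIF).

from itertools import product, combinations

dot = [[0, 1, 2, 3],
       [0, 0, 1, 2],
       [0, 0, 0, 2],
       [0, 0, 0, 0]]
A = range(4)

def is_wif(F):
    return 0 in F and all(dot[x][dot[x][z]] in F for x, y, z in product(A, A, A)
                          if dot[x][dot[y][z]] in F and dot[x][y] in F)

# all subsets of A containing 0, and those among them satisfying (WIF)
subsets = [frozenset(s) | {0} for r in range(4) for s in combinations([1, 2, 3], r)]
family = [F for F in subsets if is_wif(F)]

# pairwise intersections stay inside the family, as Theorem 8 predicts
print(all(F & G in family for F, G in product(family, family)))   # True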

Corollary 2. Let \(X\) be a subset of \(A\). Then there exists the unique minimal weak implicative UP-filter of \(A\) containing \(X\).

Proof. Let \(\mathfrak{ X }\) be the family of all weak implicative UP-filters of \( A \) containing \( X \). It is clear that \(\cap\mathfrak{X}\) is the unique minimal weak implicative UP-filter containing \(X\), according to the second part of the proof of the previous theorem.

Corollary 3. Let \(a\) be an arbitrary element of \(A\). Then there is the unique minimal weak implicative UP-filter containing \(a\).

Proof. This follows directly from the proof of the previous Corollary if we take \( X = \{a\}\).

We conclude this section by posing the following question.

Question 1. (Extension property for weak implicative UP-filter) What are the necessary conditions for a UP-filter including a weak implicative UP-filter to be a weak implicative UP-filter?

4. Conclusion

The aim of this paper was to study the concept of weak implicative UP-filters of a UP-algebra. Additionally, some links have been established between this type of UP-filter and other types of filters in these algebras, such as implicative, comparative and allied filters. This work can be the basis for further and deeper research into the properties of UP-algebras.

Author Contributions

All authors contributed equally to the writing of this paper. All authors read and approved the final manuscript.

Conflict of Interests

The authors declare no conflict of interest.

References

  1. Prabpayak, C., & Leerawat, U. (2009). On ideals and congruences in KU-algebras. Scientia Magna Journal, 5(1), 54-57.[Google Scholor]
  2. Prabpayak, C., & Leerawat, U. (2009). On isomorphisms of KU-algebras. Scientia Magna Journal, 5(3), 25-31. [Google Scholor]
  3. Iampan, A. (2017). A new branch of the logical algebra: UP-algebras. Journal of Algebra and Related Topics, 5(1), 35-54. [Google Scholor]
  4. Somjanta, J., Thuekaew, N., Kumpeangkeaw, P., & Iampan, A. (2016). Fuzzy sets in UP-algebras. Annals of Fuzzy Mathematics and Informatics, 12(6), 739-756.[Google Scholor]
  5. Jun, Y. B., & Iampan, A. (2019). Implicative UP-filters. Afrika Matematika, 30(7-8), 1093-1101. [Google Scholor]
  6. Jun, Y. B., & Iampan, A. (2019). Comparative and Allied UP-Filters. Lobachevskii Journal of Mathematics, 40(1), 60-66. [Google Scholor]
  7. Jun, Y. B., & Iampan, A. (2019). Shift UP-filters and decompositions of UP-filters in UP-Algebras. Missouri Journal of Mathematical Sciences, 31(1), 36-45. [Google Scholor]
  8. Romano, D. A. (2018). Proper UP-filters in UP-algebra. Universal Journal of Mathematics and Applications, 1(2), 98-100.[Google Scholor]
  9. Romano, D. A. (2018). Some properties of proper UP-filters of UP-algebras. Fundamental Journal of Mathematics and Applications, 1(2), 109-111. [Google Scholor]
Homomorphism of intuitionistic fuzzy multigroups https://old.pisrt.org/psr-press/journals/oms-vol-4-2020/homomorphism-of-intuitionistic-fuzzy-multigroups/ Mon, 30 Nov 2020 16:24:34 +0000 https://old.pisrt.org/?p=4733
OMS-Vol. 4 (2020), Issue 1, pp. 430 - 441 Open Access Full-Text PDF
I. M. Adamu
Abstract: This paper introduces the concept of homomorphism in the intuitionistic fuzzy multigroup context. It also investigates some homomorphic properties of intuitionistic fuzzy multigroups. It is shown that the homomorphic image and homomorphic preimage of intuitionistic fuzzy multigroups are also intuitionistic fuzzy multigroups. Finally, it presents some homomorphic properties of the normalizer of intuitionistic fuzzy multigroups.

Open Journal of Mathematical Sciences

Homomorphism of intuitionistic fuzzy multigroups

I. M. Adamu
Department of Mathematics, Federal University Dutse, P.M.B. 7156, Dutse, Jigawa State, Nigeria.; idreesmuhammadadam@gmail.com

Abstract

This paper introduces the concept of homomorphism in the intuitionistic fuzzy multigroup context. It also investigates some homomorphic properties of intuitionistic fuzzy multigroups. It is shown that the homomorphic image and homomorphic preimage of intuitionistic fuzzy multigroups are also intuitionistic fuzzy multigroups. Finally, it presents some homomorphic properties of the normalizer of intuitionistic fuzzy multigroups.

Keywords:

Intuitionistic fuzzy multiset, intuitionistic fuzzy multigroups, Homomorphism of intuitionistic fuzzy multigroups.

1. Introduction

In modern mathematics, a set is a well-defined collection of distinct objects. Set theory was introduced by the German mathematician Georg Ferdinand Ludwig Cantor (1845-1918). In the classical sense, all mathematical notions, including sets, must be exact. However, if repeated occurrences of any object are allowed in a set, then the mathematical structure is called a multiset [1]. Thus, a multiset differs from a set in the sense that each element has a multiplicity. An account of the development of multiset theory can be seen in [2,3,4,5]. Most real-life situations are complex, and to model them we need a simplification of the complex system. The simplification must be carried out in such a way that the loss of information is minimal. One way to do this is to allow some degree of uncertainty into the model. To handle situations like this, Zadeh [6] proposed fuzzy sets. A fuzzy set has a membership function that assigns to each element of the universe of discourse a number from the unit interval [0,1] to indicate the degree of belongingness to the set under consideration. Fuzzy sets were introduced with a view to reconciling mathematical modelling and human knowledge in the engineering sciences. The theory of fuzzy sets has been applied to group-theoretic notions [7,8,9,10].

Atanassov [11,12] introduced a generalization of fuzzy sets called intuitionistic fuzzy sets. At the same time, a theory called intuitionistic fuzzy set theory was independently introduced by Takeuti and Titani [13] as a theory developed in (a kind of) intuitionistic logic. Intuitionistic fuzzy sets provide a flexible framework to explain uncertainty and vagueness. The theory of intuitionistic fuzzy sets has been applied to group-theoretic notions [14,15,16]. As a generalization of multisets, Yager [17] introduced fuzzy multisets and suggested possible applications to relational databases. Shinoj et al., [18] studied the structure of groups in fuzzy multisets. Several studies on fuzzy multigroup theory have been conducted, as seen in [19,20,21,22,23,24,25]. The concept of an intuitionistic fuzzy multiset was proposed in [26] as a study of intuitionistic fuzzy sets in the multiset framework. Some work has been done on both the theory and applications of intuitionistic fuzzy multisets [27,28,29,30,31,32,33,34]. As a way to apply intuitionistic fuzzy multisets to group theory, Shinoj and John [35] proposed intuitionistic fuzzy multigroups. Adamu et al., [36] developed the concept of normal sub-intuitionistic fuzzy multigroups and investigated some of its related algebraic structures.

The motivation of this work is to establish the idea of homomorphism for intuitionistic fuzzy multigroups. This paper introduces the concept of homomorphism in the intuitionistic fuzzy multigroup context and investigates some of its properties. The outline is as follows: Section 2 presents some foundational concepts relevant to the study, whereas the main results are reported in Section 3. Section 4 summarises and concludes the paper.

2. Preliminaries

In this section we present some existing definitions and results to be used in the sequel. Throughout the work, IFMS(X) denotes the set of all intuitionistic fuzzy multisets of \(X\), and IFMG(X) denotes the set of all intuitionistic fuzzy multigroups of \(X\), where \(X\) is a non-empty set.

Definition 1. [26] Let \(X\) be a nonempty set. An intuitionistic fuzzy multiset \(A\) of \(X\) is characterized by a count membership function \(CM_A\) and a count non-membership function \(CN_A\) defined by \[CM_A\colon X\to Q \, \textrm{and}\, CN_A\colon X\to Q,\] where \(Q\) is the set of all crisp multisets drawn from the unit interval \([0,1]\), such that for each \(x\in X\) the membership sequence is defined as a decreasingly ordered sequence of elements in \(A\), denoted by \(\mu^1_A (x),\mu^2_A (x),...,\mu^p_A (x)\), where \(\mu^1_A (x)\geq\mu^2_A (x)\geq ...\geq\mu^p_A (x)\), and the corresponding non-membership sequence of elements in \(A\) is denoted by \((\nu^1_A(x),\nu^2_A(x),...,\nu^p_A(x))\), such that \(0\leq\mu^i_A (x)+\nu^i_A(x)\leq 1 \) for every \(x\in X\) and \(i=1,2,...,p\).

An IFMS \(A\) is denoted by

\[A=\lbrace \langle x, (\mu^1_A (x),\mu^2_A (x),...,\mu^p_A (x)),(\nu^1_A(x),\nu^2_A(x),...,\nu^p_A(x))\rangle : x\in X\rbrace.\]

Definition 2. [26] The length of an element \(x\) in an IFMS \(A\) is defined as the cardinality of \(CM_A(x)\) or \(CN_A(x)\) for which \(0\leq\mu^i_A (x)+\nu^i_A(x)\leq1\), and is denoted by \(L(x:A)\). That is, \(L(x:A)=|CM_A(x)|=|CN_A(x)|\). If \(A\) and \(B\) are IFMSs drawn from \(X\), then \(L(x:A,B)=\max[L(x:A),L(x:B)]\). Alternatively we use \(L(x)\) for \(L(x:A,B)\).

Definition 3. [26] For any two IFMSs \(A\) and \(B\) of a set \(X\), the following operations and relations hold.

  • (i) Inclusion: \(A\subset B\) \(\Longleftrightarrow\) \(\mu^j_A(x)\leq\mu^j_B(x)\) and \(\nu^j_A(x)\ge \nu^j_B(x)\), for \(j=1,2,...,L(x)\), \(\forall x\in X\).
  • (ii) Complement: \[\overline{A}=\lbrace \langle x, (\nu^1_A(x),\nu^2_A(x),...,\nu^p_A(x)),(\mu^1_A (x),\mu^2_A (x),...,\mu^p_A (x))\rangle : x\in X\rbrace.\]
  • (iii) Union: In \(A\cup B\), the membership and non-membership values are obtained as follows: \[\mu^j_{A\cup B} (x)= \mu^j_A (x)\vee\mu^j_B (x) \ \text{and} \ \nu^j_{A\cup B} (x)= \nu^j_A (x)\wedge\nu^j_B (x),\] \(\text{for} \ j=1,2,...,L(x), \forall x\in X\).
  • (iv) Intersection: In \(A\cap B\), the membership and non-membership values are obtained as follows: \[\mu^j_{A\cap B} (x)= \mu^j_A (x)\wedge\mu^j_B (x) \ \text{and} \ \nu^j_{A\cap B} (x)= \nu^j_A (x)\vee\nu^j_B (x),\] for \(j=1,2,...,L(x), \forall x\in X\).
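The operations of Definition 3 act componentwise on the count sequences. A minimal Python sketch of the union and intersection follows (an illustration added here with hypothetical data, not taken from the paper; the dictionaries A and B and the function names are ours).

# Hypothetical IFMSs over X = {x, y}, each element carrying length-2 count sequences
A = {'x': ([0.7, 0.5], [0.2, 0.3]), 'y': ([0.4, 0.1], [0.5, 0.6])}
B = {'x': ([0.6, 0.6], [0.3, 0.2]), 'y': ([0.5, 0.2], [0.4, 0.5])}

def ifms_union(A, B):
    # memberships: pointwise maxima; non-memberships: pointwise minima
    return {x: ([max(a, b) for a, b in zip(A[x][0], B[x][0])],
                [min(a, b) for a, b in zip(A[x][1], B[x][1])]) for x in A}

def ifms_intersection(A, B):
    # memberships: pointwise minima; non-memberships: pointwise maxima
    return {x: ([min(a, b) for a, b in zip(A[x][0], B[x][0])],
                [max(a, b) for a, b in zip(A[x][1], B[x][1])]) for x in A}

print(ifms_union(A, B)['x'])         # ([0.7, 0.6], [0.2, 0.2])
print(ifms_intersection(A, B)['x'])  # ([0.6, 0.5], [0.3, 0.3])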

Definition 4. [28] Let \(X\) and \(Y\) be two non-empty sets and \(f\colon X\to Y\) be a mapping. Then

  • (i) the image of \(A\in IFMS(X)\) under the mapping \(f\) is an IFMS of \(Y\) denoted by \(f(A)\), where \[ CM_{f(A)}(y)=\left\{ \begin{array}{rcl} \bigvee_{f(x)=y}CM_A (x), & f^{-1}(y) \ne \emptyset \\ 0, & otherwise. \\ \end{array}\right.\\\] Similarly, \[ CN_{f(A)}(y)=\left\{ \begin{array}{rcl} \bigwedge_{f(x)=y}CN_A (x), & f^{-1}(y) \ne \emptyset \\ 0, & otherwise. \\ \end{array}\right.\\\]
  • (ii) the inverse image of \(B\in IFMS(Y)\) under the mapping \(f\) is an IFMS of \(X\) denoted by \(f^{-1}(B)\), where \(CM_{f^{-1}(B)}(x)=CM_B(f(x))\) and \(CN_{f^{-1}(B)}(x)=CN_B(f(x))\).
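A small Python sketch of the image of Definition 4 follows (an illustration added here with a hypothetical map and data, not from the paper); it takes componentwise suprema and infima of the count sequences over each fibre \(f^{-1}(y)\), while the inverse image is simply composition with \(f\).

X, Y = ['a', 'b', 'c'], ['p', 'q', 'r']
f = {'a': 'p', 'b': 'p', 'c': 'q'}                 # 'r' has empty preimage

CM = {'a': [0.7, 0.4], 'b': [0.6, 0.5], 'c': [0.9, 0.3]}
CN = {'a': [0.2, 0.5], 'b': [0.3, 0.4], 'c': [0.1, 0.6]}

def image(y):
    fibre = [x for x in X if f[x] == y]
    if not fibre:                                   # empty preimage: counts are 0
        return [0.0], [0.0]
    length = len(CM[fibre[0]])
    return ([max(CM[x][i] for x in fibre) for i in range(length)],
            [min(CN[x][i] for x in fibre) for i in range(length)])

print(image('p'))   # ([0.7, 0.5], [0.2, 0.4])
print(image('r'))   # ([0.0], [0.0])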

Definition 5. [35] Let \(X\) be a group. An intuitionistic fuzzy multiset \(G\) of \(X\) is an intuitionistic fuzzy multigroup (IFMG) of \(X\) if the counts (count membership and non-membership) of \(G\) satisfies the following two conditions:

  • (i) \(CM_G (xy) \geq CM_G (x)\wedge CM_G (y)\) \(\forall x,y \in X\) and \(CN_G (xy) \leq CN_G (x)\vee CN_G (y)\) \(\forall x,y \in X\),
  • (ii) \(CM_G (x^{-1} )\geq CM_G (x)\) \(\forall x \in X\) and \(CN_G (x^{-1} )\leq CN_G (x)\) \(\forall x \in X\).
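Conditions (i) and (ii) of Definition 5 can be checked by brute force on a small example. The following Python sketch (an illustration added here with hypothetical count data over the group \(\mathbb{Z}_2\), not from the paper) verifies both conditions componentwise.

# Hypothetical counts over Z_2 = {0, 1} under addition modulo 2
CM = {0: [0.8, 0.6], 1: [0.5, 0.4]}
CN = {0: [0.1, 0.2], 1: [0.3, 0.4]}

op  = lambda x, y: (x + y) % 2   # group operation
inv = lambda x: (-x) % 2         # group inverse

def is_ifmg(elems):
    for x in elems:
        for y in elems:
            z = op(x, y)
            if not all(a >= min(b, c) for a, b, c in zip(CM[z], CM[x], CM[y])):
                return False     # violates condition (i) for CM
            if not all(a <= max(b, c) for a, b, c in zip(CN[z], CN[x], CN[y])):
                return False     # violates condition (i) for CN
        if not (all(a >= b for a, b in zip(CM[inv(x)], CM[x])) and
                all(a <= b for a, b in zip(CN[inv(x)], CN[x]))):
            return False         # violates condition (ii)
    return True

print(is_ifmg([0, 1]))   # True for this data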

Definition 6. [35] For any intuitionistic fuzzy multigroup \(A \in IFMG(X)\), there exists its inverse \(A^{-1}\), defined by \[CM_{A^{-1}}(x) = CM_A(x^{-1})\ \forall x \in X \, \textrm{and}\, CN_{A^{-1}}(x) = CN_A(x^{-1})\ \forall x \in X.\] Certainly, \(A \in IFMG(X)\) if and only if \(A^{-1} \in IFMG(X)\).

3. Main results

In this section, we introduce the homomorphism of intuitionistic fuzzy multigroups and characterize its properties with some results.

Definition 7. Let \(X\), \(Y\) be two groups and let \(f\colon X\to Y\) be an isomorphism of groups. Suppose \(A\) and \(B\) are intuitionistic fuzzy multigroups of \(X\) and \(Y\), respectively. Then, \(f\) induces a homomorphism from \(A\) to \(B\) which satisfies

  • (i) \(CM_{A}(f^{-1}(y_1y_2)) \geq CM_{A} (f^{-1}(y_1))\wedge CM_{A} (f^{-1}(y_2))\) and \(CN_{A}(f^{-1}(y_1y_2)) \leq CN_{A} (f^{-1}(y_1))\vee CN_{A} (f^{-1}(y_2))\) \(\forall y_1, y_2 \in Y\),
  • (ii) \(CM_{B} (f(x_1x_2)) \geq CM_{B} (f(x_1))\wedge CM_{B} (f(x_2))\) and \(CN_{B} (f(x_1x_2)) \leq CN_{B} (f(x_1))\vee CN_{B} (f(x_2))\) \(\forall x_1, x_2 \in X\),
where \(f(A)\) and \(f^{-1}(B)\) are as in Definition 4.

Definition 8. Let \(X\) and \(Y\) be groups and let \(A\in IFMG(X)\) and \(B \in IFMG(Y )\), respectively.

  • (i) A homomorphism \(f\) of \(X\) onto \(Y\) is called a weak homomorphism of \(A\) into \(B\) if \(f(A) \subseteq B\). If \(f\) is a weak homomorphism of \(A\) into \(B\), then we say that, \(A\) is weakly homomorphic to \(B\) denoted by \(A \sim B\).
  • (ii) An isomorphism \(f\) of \(X\) onto \(Y\) is called a weak isomorphism of \(A\) into \(B\) if \(f(A) \subseteq B\). If f is a weak isomorphism of \(A\) into \(B\), then we say that, \(A\) is weakly isomorphic to \(B\) denoted by \(A\simeq B\).
  • (iii) A homomorphism \(f\) of \(X\) onto \(Y\) is called a homomorphism of \(A\) onto \(B\) if \(f(A) = B\). If \(f\) is a homomorphism of \(A\) onto \(B\), then \(A\) is homomorphic to \(B\) denoted by \(A \approx B\).
  • (iv) An isomorphism \(f\) of \(X\) onto \(Y\) is called an isomorphism of \(A\) onto \(B\) if \(f(A) = B\). If f is an isomorphism of \(A\) onto \(B\), then \(A\) is isomorphic to \(B\) denoted by \(A\approxeq B\).

Remark 1. Let \(X\) and \(Y\) be groups and let \(A\in IFMG(X)\) and \(B \in IFMG(Y )\), respectively. Then

  • (i) a homomorphism \(f\) of \(X\) onto \(Y\) is called an epimorphism of \(A\) onto \(B\) if \(f\) is surjective.
  • (ii) a homomorphism \(f\) of \(X\) onto \(Y\) is called a monomorphism of \(A\) into \(B\) if \(f\) is injective.
  • (iii) a homomorphism \(f\) of \(X\) onto \(Y\) is called an endomorphism of \(A\) onto \(A\) if \(f\) is a map to itself.
  • (iv) a homomorphism \(f\) of \(X\) onto \(Y\) is called an automorphism of \(A\) onto \(A\) if \(f\) is both injective and surjective, that is, bijective.
  • (v) a homomorphism \(f\) of \(X\) onto \(Y\) is called an isomorphism of \(A\) onto \(B\) if \(f\) is both injective and surjective, that is, bijective.

Definition 9. Let \(A\) be an intuitionistic fuzzy submultigroup of \(B\in IFMG(X)\). Then, the normalizer of \(A\) in \(B\) is given by \[N(A) =\{g \in X \mid CM_A(gy) = CM_A(yg), CN_A(gy) = CN_A(yg)\ \forall y \in X \}.\]

Proposition 1. Let \(f\colon X\to Y\) be a homomorphism. For \(A,B\in IFMG(X)\), if \(A\subseteq B\), then \(f(A)\subseteq f(B)\).

Proof. Let \(A,B \in IFMG(X)\) and \(f: X\rightarrow Y\). Suppose \(CM_{A}(x) \leq CM_B(x)\) and \(CN_{A}(x) \geq CN_B(x)\ \forall \;x\in X.\) Then it follows that \[CM_{f(A)}(y) = CM_A(f^{-1}(y)) \leq CM_B(f^{-1}(y))=CM_{f(B)}(y),\] and \[CN_{f(A)}(y) = CN_A(f^{-1}(y)) \geq CN_B(f^{-1}(y))=CN_{f(B)}(y)\ \forall\; y\in Y.\] Hence \(f(A)\subseteq f(B)\).

Proposition 2. Let \(X\), \(Y\) be two groups and \(f\) be a homomorphism of \(X\) into \(Y\). For \(A, B\in IFMG(Y)\), if \(A\subseteq B\), then \(f^{-1}(A)\subseteq f^{-1}(B)\).

Proof. Given that \(A,B \in IFMG(Y)\) and \(f: X\rightarrow Y\), suppose \(CM_{A}(y) \leq CM_B(y)\) and \(CN_{A}(y) \geq CN_B(y)\ \forall \ y\in Y\). Then we have \[CM_{f^{-1}(A)}(x)=CM_A(f(x))\leq CM_B(f(x))=CM_{f^{-1}(B)}(x).\] Similarly, \[CN_{f^{-1}(A)}(x)=CN_A(f(x))\geq CN_B(f(x))=CN_{f^{-1}(B)}(x)\ \forall x\in X.\] Hence \(f^{-1}(A)\subseteq f^{-1}(B)\).

Definition 10. Let \(f\) be a homomorphism of a group \(X\) into a group \(Y\) , and \(A \in IFMG(X)\). If for all \(x,y \in X\), \(f(x) = f(y)\) implies \(CM_A(x) = CM_A(y)\) and \(CN_A(x) = CN_A(y)\) then, \(A\) is \(f-\)invariant.

Lemma 1. Let \(f:X\rightarrow Y\) be a group homomorphism and \(A \in IFMG(X)\). If \(f(x) = f(y)\) for \(x, y \in X\), then \(A\) is \(f\)-invariant.

Proof. Suppose \(f(x) = f(y) \forall x \in X\). Then, \[CM_{f(A)}(f(x)) = CM_{f(A)}(f(y)) \ \text{and}\ CN_{f(A)}(f(x)) = CN_{f(A)}(f(y)).\] This implies \(CM_A(x) = CM_A(y)\) and \(CN_A(x) = CN_A(y)\). Hence, \(A\) is \(f-\)invariant.

Lemma 2. If \(f:X\rightarrow Y\) is a homomorphism and \(A \in IFMG(X)\). Then

  • (i) \( f(A^{-1}) = (f(A))^{-1}\).
  • (ii) \(f^{-1}(f(A^{-1}))=f((f(A))^{-1})\).

Proof.

  • (i) Let \(y \in Y\). Then, we get \begin{eqnarray*} CM_{f(A^{-1})}(y) & = & CM_{A^{-1}}(f^{-1}(y))=CM_A(f^{-1}(y))\\ & = & CM_{f(A)}(y)=CM_{(f(A))^{-1}}(y) \forall y \in Y. \end{eqnarray*} Similarly, \begin{eqnarray*} CN_{f(A^{-1})}(y)& = & CN_{A^{-1}}(f^{-1}(y))=CN_A(f^{-1}(y))\\ & = & CN_{f(A)}(y)=CN_{(f(A))^{-1}}(y) \forall y \in Y. \end{eqnarray*} Hence \(f(A^{-1}) = (f(A))^{-1}\).
  • (ii) Let \(y \in Y\) . Then, we get \begin{eqnarray*} CM_{f^{-1}(f(A^{-1}))}(y)& = & CM_{f(A^{-1})}(f(y))=CM_{A^{-1}}(f((f^{-1}(y)))\\ & = & CM_Af((f^{-1}(y))=CM_{f^{-1}(f(A))}(y)\\ & = & CM_{f((f(A))^{-1})}(y) \forall y \in Y. \end{eqnarray*} Similarly, \begin{eqnarray*} CN_{f^{-1}(f(A^{-1}))}(y) & = & CN_{f(A^{-1})}(f(y))=CN_{A^{-1}}(f((f^{-1}(y)))\\ & = & CN_Af((f^{-1}(y))=CN_{f^{-1}(f(A))}(y)\\ & = & CN_{f((f(A))^{-1})}(y) \forall y \in Y. \end{eqnarray*} Hence \( f^{-1}(f(A^{-1}))=f((f(A))^{-1})\).

Proposition 3. Let \(X\) and \(Y\) be groups such that \(f:X\rightarrow Y\) is an isomorphism, and let \(A\in IFMG(X)\) and \(B\in IFMG(Y)\). Then

  • (i) \((f^{-1}(B))^{-1}=f^{-1}(B^{-1})\).
  • (ii) \(f^{-1}(f(A)) =f^{-1}(f(f^{-1}(B)))\).

Proof. Recall that if \(f\) is an isomorphism, then \(f(x)=y\) \(\forall y \in Y\), consequently, \(f(A)=B\).

  • (i) \begin{eqnarray*} CM_{(f^{-1}(B))^{-1}}(x)& = & CM_{f^{-1}(B)}(x^{-1})=CM_{f^{-1}(B)}(x)\\ & = & CM_B(f(x))=CM_{B^{-1}}((f(x))^{-1})\\ & = & CM_{B^{-1}}(f(x))=CM_{f^{-1}(B^{-1})}(x). \end{eqnarray*} Similarly, \begin{eqnarray*} CN_{(f^{-1}(B))^{-1}}(x)& = & CN_{f^{-1}(B)}(x^{-1})=CN_{f^{-1}(B)}(x)\\ & = & CN_B(f(x))=CN_{B^{-1}}((f(x))^{-1})\\ & = & CN_{B^{-1}}(f(x))=CN_{f^{-1}(B^{-1})}(x). \end{eqnarray*} Hence, \((f^{-1}(B))^{-1}=f^{-1}(B^{-1})\).
  • (ii) Similar to (i).

Proposition 4. Let \(f:X\rightarrow Y\) be a homomorphism of groups, and let \(\{A_i\}_{i\in I} \in IFMG(X)\) and \(\{B_i\}_{i\in I} \in IFMG(Y)\), respectively. Then

  • (i) \(f(\bigcup_{i\in I} A_i)=\bigcup_{i\in I}f(A_i)\).
  • (ii) \(f(\bigcap_{i\in I} A_i)=\bigcap_{i\in I}f(A_i)\).
  • (iii) \(f^{-1}(\bigcap_{i\in I} B_i)=\bigcap_{i\in I}f^{-1}(B_i)\)
  • (iv) \(f^{-1}(\bigcup_{i\in I} B_i)=\bigcup_{i\in I}f^{-1}(B_i)\)

Proof.

  • (i) Let \(x \in X\) and \(y \in Y\). Since \(f\) is a homomorphism, \(f(x) = y\). Then, we have \begin{eqnarray*} CM_{f(\bigcup_{i\in I} A_i)}(y) & = & CM_{\bigcup_{i\in I} A_i}(f^{-1}(y))\\ & = & \bigvee_{i \in I}CM_{ A_i}(f^{-1}(y))\\ & = & \bigvee_{i \in I}CM_{f( A_i)}(y)\\ & = & CM_{\bigcup_{i\in I} f(A_i)}(y) \; \forall y \in Y. \end{eqnarray*} Similarly, \begin{eqnarray*} CN_{f(\bigcup_{i\in I} A_i)}(y) & = & CN_{\bigcup_{i\in I} A_i}(f^{-1}(y))\\ & = & \bigwedge_{i \in I}CN_{ A_i}(f^{-1}(y))\\ & = & \bigwedge_{i \in I}CN_{f( A_i)}(y)\\ & = & CN_{\bigcup_{i\in I} f(A_i)}(y) \; \forall y \in Y. \end{eqnarray*} Hence, \(f(\bigcup_{i\in I} A_i)=\bigcup_{i\in I}f(A_i)\). The proofs of (ii)-(iv) are similar to (i).

Proposition 5. Let \(X\) be a group and \(f\colon X\to X\) be an automorphism. If \(A \in IFMG(X)\), then \(f(A) = A\) if and only if \(f^{-1}(A) = A\). Consequently, \( f(A) = f^{-1}(A)\).

Proof. Let \(f(x) = x\; \forall x \in X\) since \(f\) is an automorphism. Suppose \(f(A) = A,\) we have \begin{eqnarray*} CM_{f(A)}(x) & = & CM_A(f^{-1}(x)) = CM_A(x)\\ & = & CM_A(f^{-1}(x)) = CM_{f(A)}(x). \end{eqnarray*} Similarly,\begin{eqnarray*} CN_{f(A)}(x) & = & CN_A(f^{-1}(x)) = CN_A(x)\\ & = & CN_A(f^{-1}(x)) = CN_{f(A)}(x). \end{eqnarray*} Thus \(f^{-1}(A) = A\).

Conversely, let \(f^{-1}(A) = A,\) we have

\begin{eqnarray*} CM_{f^{-1}(A)}(x) & = & CM_A(f(x)) = CM_A(x)\\ & = & CM_A(f^{-1}(x)) = CM_{f(A)}(x). \end{eqnarray*} Similarly, \begin{eqnarray*} CN_{f^{-1}(A)}(x) & = & CN_A(f(x)) = CN_A(x)\\ & = & CN_A(f^{-1}(x)) = CN_{f(A)}(x). \end{eqnarray*} Thus \(f(A) = A\). Hence \( f(A) = A \Leftrightarrow f^{-1}(A) = A\).

Proposition 6. Let \(f\colon X\to Y\) be a homomorphism. If \(A \in IFMG(X)\), then \(f^{-1}(f(A)) = A\) whenever \(f\) is injective.

Proof. Suppose \(f\) is injective, then \(f(x) = y\; \forall x\in X\) and \(\forall y \in Y\). Now \begin{eqnarray*} CM_{f^{-1}(f(A))}(x) & = & CM_{f(A)}(f(x)) = CM_{f(A)}(y)\\ & = & CM_A(f^{-1}(y)) = CM_A(x). \end{eqnarray*} Also, \begin{eqnarray*} CN_{f^{-1}(f(A))}(x) & = & CN_{f(A)}(f(x)) = CN_{f(A)}(y)\\ & = & CN_A(f^{-1}(y)) = CN_A(x). \end{eqnarray*} Hence, \(f^{-1}(f(A)) = A.\)

Corollary 1. Let \(f\colon X\to Y\) be a homomorphism. If \(B \in IFMG(Y )\), then \(f(f^{-1}(B)) = B\) whenever \(f\) is surjective.

Proof. Similar to Proposition 6.

Proposition 7. Let \(X\), \(Y\) and \(Z\) be groups and \(f\colon X\to Y\) and \(g\colon Y\to Z\) be homomorphisms. Let \(\lbrace A_i\rbrace_{i\in I} \in IFMG(X)\) and \(\lbrace B_i\rbrace_{i\in I} \in IFMG(Y)\), and let \(i \in I\). Then

  • (i) \(f(A_i) \subseteq B_i \implies A_i \subseteq f^{-1}(B_i)\).
  • (ii) \(g[f(A_i)] = [gf](A_i)\).
  • (iii) \(f^{-1}[g^{-1}(B_i)] = [gf]^{-1}(B_i)\).

Proof.

  • (i) The proof of (i) is trivial.
  • (ii) Since \(f\) and \(g\) are homomorphisms, then, \(f(x) = y\) and \(g(y) = z\) \(\forall x \in X,\; \forall y \in Y\) and \(\forall z \in Z\), respectively. Now \begin{eqnarray*} CM_{g[f(A_i)]}(z) & = & CM_{f(A_i)}(g^{-1}(z)) = CM_{f(A_i)}(y)\\ & = & CM_{A_i}(f^{-1}(y)) = CM_{A_i}(x). \end{eqnarray*} Similarly, \begin{eqnarray*} CN_{g[f(A_i)]}(z) & = & CN_{f(A_i)}(g^{-1}(z)) = CN_{f(A_i)}(y)\\ & = & CN_{A_i}(f^{-1}(y)) = CN_{A_i}(x). \end{eqnarray*} Also, \begin{eqnarray*} CM_{[gf](A_i)}(z) & = & CM_{g(f(A_i))}(z) = CM_{f(A_i)}(g^{-1}(z))\\ & = & CM_{f(A_i)}(y) = CM_{A_i}(f^{-1}(y))\\ & = & CM_{A_i}(x)\, \forall x \in X. \end{eqnarray*} Similarly, \begin{eqnarray*} CN_{[gf](A_i)}(z) & = & CN_{g(f(A_i))}(z) = CN_{f(A_i)}(g^{-1}(z))\\ & = & CN_{f(A_i)}(y) = CN_{A_i}(f^{-1}(y))\\ & = & CN_{A_i}(x)\, \forall x \in X. \end{eqnarray*} Hence \(g[f(A_i)] = [gf](A_i)\).
  • (iii) Similar to (ii).

Proposition 8. Let \(X\) and \(Y\) be groups and \(f\colon X\to Y\) be an isomorphism. Then, \(A \in IFMG(X)\) if and only if \(f(A)\in IFMG(Y)\).

Proof. Suppose \(A \in IFMG(X)\). Let \(x, y \in Y\); then, since \(f\) is an isomorphism, there exist \(a, b \in X\) such that \(f(a) = x\) and \(f(b) = y\). Write \(B = f(A)\). We know that \[CM_B(x) = CM_A(f^{-1}(x)) = \bigvee_{a \in f^{-1}(x)}CM_A(a)\] and \[CN_B(x) = CN_A(f^{-1}(x)) = \bigwedge_{a \in f^{-1}(x)}CN_A(a).\] Also, \[CM_B(y) = CM_A(f^{-1}(y))= \bigvee_{b \in f^{-1}(y)}CM_A(b),\] and \[CN_B(y) = CN_A(f^{-1}(y))= \bigwedge_{b \in f^{-1}(y)}CN_A(b).\] Clearly, \(a \in f^{-1}(x)\neq \emptyset\) and \(b \in f^{-1}(y)\neq \emptyset.\) For \(a \in f^{-1}(x)\) and \(b \in f^{-1}(y)\) \(\implies x = f(a)\) and \(y = f(b)\). Thus, \[f(ab^{-1}) = f(a)f(b^{-1}) = f(a)(f(b))^{-1} = xy^{-1}.\] Let \(c = ab^{-1}\, \implies c \in f^{-1}(xy^{-1})\).

Now,

\begin{eqnarray*} CM_B(xy^{-1}) & = & \bigvee_{c \in f^{-1}(xy^{-1})} CM_A(c)= CM_A(ab^{-1})\\ & \geq & CM_A(a) \wedge CM_A(b)= CM_{f^{-1}(B)}(a) \wedge CM_{f^{-1}(B)}(b)\\ & = & CM_B(f(a)) \wedge CM_B(f(b))\\ & = & CM_B(x) \wedge CM_B(y)\; \forall x, y \in Y. \end{eqnarray*} Similarly, \begin{eqnarray*} CN_B(xy^{-1}) & = & \bigwedge_{c \in f^{-1}(xy^{-1})} CN_A(c)= CN_A(ab^{-1})\\ & \leq & CN_A(a) \vee CN_A(b)= CN_{f^{-1}(B)}(a) \vee CN_{f^{-1}(B)}(b)\\ & = & CN_B(f(a)) \vee CN_B(f(b))\\ & = & CN_B(x) \vee CN_B(y)\; \forall x, y \in Y. \end{eqnarray*} Hence, \(f(A) \in IFMG(Y)\).

Conversely, let \(a, b \in X\) and suppose \(f(A) \in IFMG(Y)\). Then,

\begin{eqnarray*} CM_A(ab^{-1}) &= & CM_{f^{-1}(B)}(ab^{-1})= CM_B(f(ab^{-1}))\\ & = & CM_B(f(a)f(b^{-1}))= CM_B(f(a)(f(b))^{-1})\\ & \geq & CM_B(f(a)) \wedge CM_B(f(b))\\ & = & CM_{f^{-1}(B)}(a) \wedge CM_{f^{-1}(B)}(b)\\ & = & CM_A(a) \wedge CM_A(b). \end{eqnarray*} Similarly, \begin{eqnarray*} CN_A(ab^{-1}) &= & CN_{f^{-1}(B)}(ab^{-1})= CN_B(f(ab^{-1}))\\ & = & CN_B(f(a)f(b^{-1}))= CN_B(f(a)(f(b))^{-1})\\ & \leq & CN_B(f(a)) \vee CN_B(f(b))\\ & = & CN_{f^{-1}(B)}(a) \vee CN_{f^{-1}(B)}(b)\\ & = & CN_A(a) \vee CN_A(b). \end{eqnarray*} Hence, \(A \in IFMG(X)\).

Proposition 9. Let \(X\) and \(Y\) be groups and \(f\colon X\to Y\) be an isomorphism. Then, \(B \in IFMG(Y)\) if and only if \(f^{-1}(B) \in IFMG(X)\).

Proof. Suppose \(B \in IFMG(Y)\). Since \(f^{-1}(B)\) is the inverse image of \(B\), we get \begin{eqnarray*} CM_{f^{-1}(B)}(ab^{-1}) & = & CM_B(f(ab^{-1}))= CM_B(f(a)f(b^{-1}))\\ & = & CM_B(f(a)(f(b))^{-1})\geq CM_B(f(a)) \wedge CM_B(f(b))\\ & = & CM_{f^{-1}(B)}(a) \wedge CM_{f^{-1}(B)}(b), \end{eqnarray*} and \begin{eqnarray*} CN_{f^{-1}(B)}(ab^{-1}) & = & CN_B(f(ab^{-1}))= CN_B(f(a)f(b^{-1}))\\ & = & CN_B(f(a)(f(b))^{-1})\leq CN_B(f(a)) \vee CN_B(f(b))\\ & = & CN_{f^{-1}(B)}(a) \vee CN_{f^{-1}(B)}(b)\;\forall\, a,b\in X. \end{eqnarray*} Hence \(f^{-1}(B) \in IFMG(X)\).

Conversely, suppose \(f^{-1}(B) \in IFMG(X)\). Write \(A = f^{-1}(B)\); since \(f\) is surjective, \(f(A) = B\). We get

\begin{eqnarray*} CM_B(xy^{-1}) & = & CM_{f(A)}(xy^{-1})= CM_A(f^{-1}(xy^{-1}))\\ & = & CM_A(f^{-1}(x)f^{-1}(y^{-1}))= CM_A(f^{-1}(x)(f^{-1}(y))^{-1})\\ & \geq & CM_A(f^{-1}(x)) \wedge CM_A(f^{-1}(y))= CM_{f(A)}(x) \wedge CM_{f(A)}(y)\\ & = & CM_B(x) \wedge CM_B(y), \end{eqnarray*} and \begin{eqnarray*} CN_B(xy^{-1}) & = & CN_{f(A)}(xy^{-1})= CN_A(f^{-1}(xy^{-1}))\\ & = & CN_A(f^{-1}(x)f^{-1}(y^{-1}))= CN_A(f^{-1}(x)(f^{-1}(y))^{-1})\\ & \leq & CN_A(f^{-1}(x)) \vee CN_A(f^{-1}(y))= CN_{f(A)}(x) \vee CN_{f(A)}(y)\\ & = & CN_B(x) \vee CN_B(y) \; \forall x,y\in Y. \end{eqnarray*} Hence, \(B \in IFMG(Y)\).

Corollary 2. Let \(X\) and \(Y\) be groups and \(f:X\rightarrow Y\) be an isomorphism. Then, the following statements hold;

  • (i) \(A^{-1}\in IFMG(X)\) if and only if \(f(A^{-1})\in IFMG(Y)\).
  • (ii) \(B^{-1}\in IFMG(Y)\) if and only if \(f^{-1}(B^{-1})\in IFMG(X)\).

Proof. Straightforward from Propositions 8 and 9.

Corollary 3. Let \(X\) and \(Y\) be groups and \(f:X\rightarrow Y\) be an isomorphism. If \(\bigcap_{i\in I} A_i\in IFMG(X)\) and \(\bigcap_{i\in I} B_i\in IFMG(Y)\). Then,

  • (i) \(f(\bigcap_{i\in I} A_i) \in IFMG(Y)\).
  • (ii) \(f^{-1}(\bigcap_{i\in I} B_i) \in IFMG(X)\).

Proof. Straightforward from Propositions 8 and 9.

Corollary 4. Let \(X\) and \(Y\) be groups and \(f:X\rightarrow Y\) be an isomorphism. If \(\bigcup_{i\in I} A_i\in IFMG(X)\) and \(\bigcup_{i\in I} B_i\in IFMG(Y)\). Then,

  • (i) \(f(\bigcup_{i\in I} A_i) \in IFMG(Y)\).
  • (ii) \(f^{-1}(\bigcup_{i\in I} B_i) \in IFMG(X)\).

Proof. Straightforward from Propositions 8 and 9.

Proposition 10. Let \(f\) be a homomorphism of an abelian group \(X\) onto an abelian group \(Y\). Let \(A\) and \(B\) be intuitionistic fuzzy multigroups of \(X\) such that \(A \subseteq B\). Then, \(f(N(A))\subseteq N(f(A))\).

Proof. Let \(x \in f(N(A))\). Then, \(\exists u \in N(A)\) such that \(f(u) = x\). For all \(y \in Y\), we have \begin{eqnarray*} CM_{f(A)}(xyx^{-1}) & = & CM_{A}(f^{-1}(xyx^{-1}))\\ & = & CM_A(f^{-1}(x)f^{-1}(y)f^{-1}(x^{-1}))\\ & = & CM_A(f^{-1}(x)f^{-1}(y)f^{-1}(x)^{-1})\\ & = & CM_A(f^{-1}(x)f^{-1}(y)(f^{-1}(x))^{-1})\\ & = & CM_A(f^{-1}(f(u))f^{-1}(f(v))(f^{-1}(f(u)))^{-1})\\ & = & CM_A(uvu^{-1})= CM_A(vuu^{-1})= CM_A(v)\\ & = & CM_A(f^{-1}(y)) = CM_{f(A)}(y), \end{eqnarray*} and similarly, \begin{eqnarray*} CN_{f(A)}(xyx^{-1}) & = & CN_{A}(f^{-1}(xyx^{-1}))\\ & = & CN_A(f^{-1}(x)f^{-1}(y)f^{-1}(x^{-1}))\\ & = & CN_A(f^{-1}(x)f^{-1}(y)f^{-1}(x)^{-1})\\ & = & CN_A(f^{-1}(x)f^{-1}(y)(f^{-1}(x))^{-1})\\ & = & CN_A(f^{-1}(f(u))f^{-1}(f(v))(f^{-1}(f(u)))^{-1})\\ & = & CN_A(uvu^{-1})=CN_A(vuu^{-1})= CN_A(v) \\ & = & CN_A(f^{-1}(y)) = CN_{f(A)}(y), \end{eqnarray*} where \(v \in X\) such that \(f(v) = y\). Thus, \(x \in N(f(A))\). Hence \(f(N(A))\subseteq N(f(A))\).

Proposition 11. Let \(f\colon X\to Y\) be a homomorphism of abelian groups \(X\) and \(Y\) . Let \(A\) and \(B\) be intuitionistic fuzzy multigroups of \(Y\) such that \(B \subseteq A.\) Then, \(f^{-1}(N(B)) = N(f^{-1}(B))\).

Proof. Let \(x \in f^{-1}(N(B))\). Then for all \(y \in X\), \begin{eqnarray*} CM_{f^{-1}(B)}(xyx^{-1}) & = & CM_B(f(xyx^{-1}))= CM_B(f(x)f(y)f(x^{-1}))\\ & = & CM_B(f(x)f(y)(f(x))^{-1})= CM_B(f(y)f(x)(f(x))^{-1})\\ & = & CM_B(f(y)) = CM_{f^{-1}(B)}(y). \end{eqnarray*} Similarly, \begin{eqnarray*} CN_{f^{-1}(B)}(xyx^{-1}) & = & CN_B(f(xyx^{-1}))= CN_B(f(x)f(y)f(x^{-1}))\\ & = & CN_B(f(x)f(y)(f(x))^{-1})= CN_B(f(y)f(x)(f(x))^{-1})\\ & = & CN_B(f(y)) = CN_{f^{-1}(B)}(y). \end{eqnarray*} Thus \(x \in N(f^{-1}(B))\). So \(f^{-1}(N(B)) \subseteq N(f^{-1}(B))\). Again, let \(x \in N(f^{-1}(B))\) and \(f(x) = u\). Then for all \(v \in Y\), \begin{eqnarray*} CM_B(uvu^{-1}) & = & CM_B(f(x)f(y)(f(x))^{-1})= CM_B(f(y)f(x)(f(x))^{-1})\\ & = & CM_B(f(y)) = CM_B(v), \end{eqnarray*} and \begin{eqnarray*} CN_B(uvu^{-1}) & = & CN_B(f(x)f(y)(f(x))^{-1})= CN_B(f(y)f(x)(f(x))^{-1})\\ & = & CN_B(f(y)) = CN_B(v), \end{eqnarray*} where \(y \in X\) such that \(f(y) = v\). Clearly, \(u\in N(B)\), that is, \(x \in f^{-1}(N(B))\). Thus \(N(f^{-1}(B)) \subseteq f^{-1}(N(B))\). Hence \(f^{-1}(N(B)) = N(f^{-1}(B))\).

Proposition 12. Let \(f:X\rightarrow Y\) be an isomorphism and let \(A\) be a normal sub-intuitionistic fuzzy multigroup of \(B\in IFMG(X)\). Then, \(f(A)\) is a normal sub-intuitionistic fuzzy multigroup of \(f(B) \in IFMG(Y)\).

Proof. By Proposition 8, \(f(A), f(B)\in IFMG(Y)\) and so, \(f(A) \subseteq f(B)\). We show that \(f(A)\) is a normal sub-intuitionistic fuzzy multigroup of \(f(B)\). Let \(x, y \in Y\). Since \(f\) is an isomorphism, then for some \(a \in X\) we have \(f(a) = x\). Thus, \begin{eqnarray*} CM_{f(A)}(xyx^{-1}) & = & CM_A(b)\ \text{for}\ f(b) = xyx^{-1}, \forall b \in X\\ & = & CM_A(a^{-1}ba)\ \text{for} \ f(a^{-1}ba) = y\\ & \geq & CM_A(b)\ \text{for}\ f(b) = y, \forall a^{-1}ba \in X\\ & = & CM_A(f^{-1}(y))\ \text{for}\ f(b) = y\\ & = & CM_{f(A)}(y). \end{eqnarray*} Similarly, \begin{eqnarray*} CN_{f(A)}(xyx^{-1}) & = & CN_A(b)\ \text{for}\ f(b) = xyx^{-1}, \forall b \in X\\ & = & CN_A(a^{-1}ba)\ \text{for} \ f(a^{-1}ba) = y\\ & \leq & CN_A(b)\ \text{for}\ f(b) = y, \forall a^{-1}ba \in X\\ & = & CN_A(f^{-1}(y))\ \text{for}\ f(b) = y\\ & = & CN_{f(A)}(y). \end{eqnarray*} Hence, \(f(A)\) is a normal sub-intuitionistic fuzzy multigroup of \(f(B)\).

Proposition 13. Let \(Y\) be a group and \(A \in IFMG(Y)\). If \(f\) is an isomorphism of \(X\) onto \(Y\) and \(B\) is a normal sub-intuitionistic fuzzy multigroup of \(A\), then \(f^{-1}(B)\) is a normal sub-intuitionistic fuzzy multigroup of \(f^{-1}(A)\).

Proof. By Proposition 9, \(f^{-1}(A), f^{-1}(B) \in IFMG(X)\). Since \(B\) is an intuitionistic fuzzy submultigroup of \(A\), so \(f^{-1}(B)\subseteq f^{-1}(A)\). Let \(a, b \in X\), then we have \begin{eqnarray*} CM_{f^{-1}(B)}(aba^{-1})& = & CM_B(f(aba^{-1})) = CM_B(f(a)f(b)(f(a))^{-1})\\ & = & CM_B(f(a)(f(a))^{-1}f(b))\geq CM_B(e) \wedge CM_B(f(b))\\ & = & CM_{f^{-1}(B)}(b), \end{eqnarray*} \(\implies CM_{f^{-1}(B)}(aba^{-1}) \geq CM_{f^{-1}(B)}(b)\).

Similarly,

\begin{eqnarray*} CN_{f^{-1}(B)}(aba^{-1})& = & CN_B(f(aba^{-1})) = CN_B(f(a)f(b)(f(a))^{-1})\\ & = & CN_B(f(a)(f(a))^{-1}f(b))\leq CN_B(e) \vee CN_B(f(b))\\ & = & CN_{f^{-1}(B)}(b), \end{eqnarray*} \(\implies CN_{f^{-1}(B)}(aba^{-1}) \leq CN_{f^{-1}(B)}(b)\). This completes the proof.

4. Conclusion

In this paper, we have introduced the concept of homomorphism in the intuitionistic fuzzy multigroup context and investigated some homomorphic properties of intuitionistic fuzzy multigroups. It was established that the homomorphic image and homomorphic preimage of intuitionistic fuzzy multigroups are also intuitionistic fuzzy multigroups. More group-theoretic concepts could be developed in the intuitionistic fuzzy multigroup setting in future research.

Conflict of Interests

The author declares no conflict of interest.

References

  1. Blizard, W. D. (1993). Dedekind multisets and functions shells. Theoretical Computer Science, 110, 79-98. [Google Scholor]
  2. Blizard,W. D. (1989). Multiset theory. Notre Dame Journal of Formal Logic, 30(1), 36-66. [Google Scholor]
  3. Jena,S. P. , Ghosh, S. K., & Tripathi, B. K. (2011). On theory of bags and lists. Information Science, 132, 241-254. [Google Scholor]
  4. Syropoulos, A. (2001). Mathematics of multisets. C. S. Calude et al. (Eds.). Multiset Processing, LNCS, 347-358. [Google Scholor]
  5. Syropoulos, A. (2003). Categorical models of multisets. Romanian Journal of Information Science and Technology, 6(3-4), 393-400. [Google Scholor]
  6. Zadeh, L. A. (1965). Fuzzy sets. Information and Control, 8(3),338-353. [Google Scholor]
  7. Anthony, J. M., & Sherwood, H. (1977). Fuzzy groups redefined. Journal of Mathematical Analysis and Application, 69, 124-130. [Google Scholor]
  8. Bhakat, S. K., & Das, P. (1992). On the definition of a fuzzy subgroup. Fuzzy Sets and Systems, 51, 235-241. [Google Scholor]
  9. Mordeson,J. N., Bhutani, K. R., & Rosenfeld, A.(2005). Fuzzy Group Theory. Springer.[Google Scholor]
  10. Rosenfeld, A.(1971). Fuzzy groups. Journal of Mathematical Analysis and Application, 35, 512-517.[Google Scholor]
  11. Atanassov, K. (1983). Intuitionistic fuzzy sets. VII ITKR's Session, Sofia, Bulgaria, Central Science-Technical Library of the Bulgarian Academy of Sciences, 1697/84. [Google Scholor]
  12. Atanassov, K. T. (1986). Intuitionistic fuzzy sets. Fuzzy Sets and Systems, 20, 87-96. [Google Scholor]
  13. Takeuti,G., & Titani, S. (1984). Intuitionistic fuzzy logic and Intuitionistic fuzzy set theory. Journal of Symbolic Logic, 49, 851-866. [Google Scholor]
  14. Biswas, R. (1989). Intuitionistic fuzzy subgroups. Mathematical Forum, 10, 37-46. [Google Scholor]
  15. Sharma, P. K. (2011). \((\alpha, \beta)\)-cut of intuitionistic fuzzy groups. International Mathematics Forum, 6(53), 2605-2614. [Google Scholor]
  16. Umer, S., Hanan, A., Abdul, R., Saba, D., & Fatima, T. (2020). On some algebraic aspects of n-intuitionistic fuzzy subgroups. Journal of Taiba University for Science, 14(1), 463-469. [Google Scholor]
  17. Yager, R. R. (1986). On theory of bags. International Journal of General Systems, 13, 23-27. [Google Scholor]
  18. Shinoj, T. K., Baby, A., & Sunil, J. J. (2015). On some algebraic structures of fuzzy multisets. Annals of Fuzzy Mathematics and Informatics, 9(1),77-90. [Google Scholor]
  19. Ejegwa, P. A. (2018). Homomorphism of fuzzy multigroups and some of its properties. Applications and Applied Mathematics, vol. 13(1),114-129.[Google Scholor]
  20. Ejegwa, P. A. (2019). Direct product of fuzzy multigroups. Journal of New Theory, 28, 62-73. [Google Scholor]
  21. Ejegwa,P. A. (2020). On alpha-cuts homomorphism of fuzzy multigroups. Annals of Fuzzy Mathematics and Informatics, 19(1), 73-87. [Google Scholor]
  22. Ejegwa, P. A. (2020). Some properties of alpha-cuts of fuzzy multigroups. Journal of Fuzzy Mathematics, 28(1), 201-222. [Google Scholor]
  23. Ejegwa, P. A. (2020). Some group's theoretic notions in fuzzy multigroup context, In Handbook of Research on Emerging Applications of Fuzzy Algebraic Structures, pp. 34-62. IGI Global Publisher, Hershey, Pennsylvania 17033-1240, USA. [Google Scholor]
  24. Ejegwa, P. A., & Agbetayo, J. M. (2020). On commutators of fuzzy multigroups. Earthline Journal of Mathematical Sciences, 4(2), 189-210.[Google Scholor]
  25. Ejegwa, P. A., Agbetayo, J. M., & Otuwe, J. A. (2020). Characteristic and Frattini fuzzy submultigroups of fuzzy multigroups. Annals of Fuzzy Mathematics and Informatics, 19(2),139-155.[Google Scholor]
  26. Shinoj, T. K., & Sunil, J. J. (2013). Intuitionistic fuzzy multisets. International Journal of Engineering Science and Innovative Technology, 2(6), 1-24. [Google Scholor]
  27. Ejegwa, P. A. (2014). On difference and symmetric difference operations on intuitionistic fuzzy multisets. Journal of Global Research in Mathematical Archives, 2(10),16-21. [Google Scholor]
  28. Ejegwa, P. A. (2015). New operations on intuitionistic fuzzy multisets. Journal of Mathematics and Informatics, 3, 17-23. [Google Scholor]
  29. Ejegwa, P. A. (2015). Mathematical techniques to transform intuitionistic fuzzy multisets to fuzzy sets. Journal of Information and Computing Science, 10(2), 169-172. [Google Scholor]
  30. Ejegwa, P. A. (2016). Some operations on intuitionistic fuzzy multisets. Journal of Fuzzy Mathematics, 24(4),761-768. [Google Scholor]
  31. Ejegwa, P. A. (2016). On intuitionistic fuzzy multisets theory and its application in diagnostic medicine. MAYFEB Journal of Mathematics, 4, 13-22. [Google Scholor]
  32. Ejegwa, P. A., & Awolola, J. A.(2013). Some algebraic structures of intuitionistic fuzzy multisets. International Journal of Science and Technology, 2(5), 373-376. [Google Scholor]
  33. Ejegwa, P. A., Kwarkar, L. N., & Ihuoma, K. N.(2016). Application of intuitionistic fuzzy multisets in appointment process. International Journal of Computer Applications, 135(1),1-4. [Google Scholor]
  34. Ibrahim, A. M., & Ejegwa, P. A. (2013). Some modal operators on intuitionistic fuzzy multisets. International Journal of Scientific and Engineering Research, 4(9), 1814-1822. [Google Scholor]
  35. Shinoj, T. K., & John, S. J. (2015). Intuitionistic fuzzy multigroups. Annals of Pure and Applied Mathematics, 9(1), 131-143. [Google Scholor]
  36. Adamu, I. M., Tella, Y., & Alkali, A. J. (2019). On normal sub-intuitionistic fuzzy Multigroups. Annals of Pure and Applied Mathematics, 19(2), 127-139.[Google Scholor]
]]>
Stability of stochastic 2D Navier-Stokes equations with memory and Poisson jumps https://old.pisrt.org/psr-press/journals/oms-vol-4-2020/stability-of-stochastic-2d-navier-stokes-equations-with-memory-and-poisson-jumps/ Mon, 30 Nov 2020 15:17:24 +0000 https://old.pisrt.org/?p=4731
OMS-Vol. 4 (2020), Issue 1, pp. 417 - 429 Open Access Full-Text PDF
Diem Dang Huan
Abstract: The objective of this paper is to study the stability of the weak solutions of stochastic 2D Navier-Stokes equations with memory and Poisson jumps. The asymptotic stability of the stochastic Navier-Stokes equation as a semilinear stochastic evolution equation in Hilbert spaces is obtained in both the mean square and almost sure senses. Our results extend and improve some existing ones.
]]>

Open Journal of Mathematical Sciences

Stability of stochastic 2D Navier-Stokes equations with memory and Poisson jumps

Diem Dang Huan
Than Nhan Trung High School, Bacgiang Agriculture and Forestry University, Bacgiang 21000, Vietnam.; diemdanghuan@gmail.com

Abstract

The objective of this paper is to study the stability of the weak solutions of stochastic 2D Navier-Stokes equations with memory and Poisson jumps. The asymptotic stability of the stochastic Navier-Stokes equation as a semilinear stochastic evolution equation in Hilbert spaces is obtained in both the mean square and almost sure senses. Our results extend and improve some existing ones.

Keywords:

Stochastic Navier-Stokes equations, delays, Lévy noise.

1. Introduction

In this paper, we will investigate the stability of the weak solutions of stochastic 2D Navier-Stokes equations with memory and Poisson jumps of the form:

\begin{equation} \label{eq1.1} \begin{cases} dX=\big[\nu \triangle X-\langle X,\nabla \rangle X-\nabla p+f(x)+g(X(t-\rho(t)))\big]dt +\sigma(t,X(t-\delta(t)))dW(t) \\ \quad\quad\quad+\int_{Z}k(t,X(t-\gamma(t)),z)\widetilde{\eta}(dt,dz), \\ \text{div \(X=0\)} \quad in \quad (0,+\infty)\times D,\quad X(t,x)=0 \quad \text{on} \quad(0,+\infty)\times \partial D,\quad \\ X(0,x)=X_{0}(x), \quad X(t,x)=\phi(t,x), \text{ for \(x\in D\), and \(t\in [-r,0]\), with \(r>0\)}, \end{cases} \end{equation}
(1)
where \(X\) is the velocity field of the fluid, \(\nu>0\) is the kinematic viscosity, \(p\) is the associated hydrostatic pressure, \(f\) is a non-delayed external force field, \(g(X(t-\rho(t)))\) is another external force field with delay, \(\sigma(t,X(t-\delta(t)))dW(t)+\int_{Z}k(t,X(t-\gamma(t)),z)\widetilde{\eta}(dt,dz)\) is a random external force field with delays, where \(W\) is an infinite dimensional Wiener process and \(\widetilde{\eta}\) is a compensated time homogeneous Poisson random measure, \(\rho,\delta,\gamma: \mathbb{R}^+\rightarrow[0,r]\) are continuous functions, \(D\) is a regular open bounded domain of \(\mathbb{R}^2\) with boundary \(\partial D\), \(X_0\) is the initial velocity, \(\phi\) is the initial datum in the interval \([-r,0]\).

Navier-Stokes equations are the fundamental model of fluid mechanics and turbulence. These equations have been the object of numerous works (see, for instance, [1,2] and the references therein), even on unbounded domains (see [2,3]), since the first paper of Leray was published in 1933 [4]. Besides, noise or stochastic perturbation is unavoidable and omnipresent in nature as well as in man-made systems. Therefore, it is of great significance to incorporate stochastic effects into the investigation of Navier-Stokes equations. To the best of our knowledge, the theory of stochastic Navier-Stokes equations has its roots in the 1959 edition of Landau and Lifshitz [5], and the first work on stochastic Navier-Stokes equations written from the mathematical point of view is the paper [6]. We mention here the works ([7,8,9,10] and the references cited therein) concerning the two-dimensional (2D, in short) stochastic Navier-Stokes equation.

Partial differential equations with memory (e.g. delay) have attracted great interest due to their applications in describing many sophisticated dynamical systems in physics, chemistry, biology, economics and social sciences. On this matter, we refer the reader to [11,12,13,14,15,16] and the references therein. In the past few years, many papers have studied the Navier-Stokes equations with a forcing term which contains some hereditary characteristics (see, for instance [17,18,19] and the references therein). In addition, the long-time behavior and exponential stability of the Navier-Stokes equations is an interesting and challenging problem, since it can provide useful information on the future evolution of the system (see, [20,21]). On the exponential behavior for stochastic 2D Navier-Stokes equations with variable delay, we refer the reader to [22,23,24], and recently, in [25] and [26], the authors investigated the asymptotic behavior of solutions of stochastic evolution equations for second grade fluids and non-Newtonian fluids, respectively.

On the other hand, the world is more complicated, and models which are allowed to have jumps - both big and small - are desirable. Hence, it is necessary and important to study stochastic systems with Poisson jumps or Lévy processes (see, for instance, [27,28,29,30,31] and the references therein). In recent years, the stochastic 2D (3D) Navier-Stokes equations with Lévy noise have attracted much attention from researchers. To be more precise, in [32], Motyl considered the existence of solutions, in a probabilistic sense, to the viscous, incompressible Navier-Stokes equations driven by Lévy noise consisting of a compensated time homogeneous Poisson random measure and a Wiener process in two and three spatial dimensions; Dong et al. [33] discussed the existence of stationary weak solutions of stochastic 3D Navier-Stokes equations involving jumps and compared the Galerkin stationary probability measures for the case driven by Lévy noise and the one driven by a Wiener process; and in [34], by using an abstract setting, Brzezniak et al. studied the existence and uniqueness of the solution of an abstract nonlinear equation driven by a multiplicative noise of Lévy type. More specifically, just recently, Taniguchi [35] obtained the existence and exponential stability of energy solutions to a 2D stochastic functional Navier-Stokes equation perturbed by a Lévy process. However, to prove the exponential stability results in [35], the author imposed that the delay function \(\rho(t)\) appears only in the non-random external force and, furthermore, required a strong assumption on this delay function, namely that \(\rho(t)\) is differentiable and satisfies \(0\leq \rho'(t)< M_{\ast}< 1\) for some \(M_{\ast}>0\). Thus, we make the first attempt to establish some results for more general forcing terms and to relax this restriction. Under some suitable assumptions, by using the Itô formula for jumps and the Burkholder-Davis-Gundy inequalities of stochastic analysis, we show that the weak solution to (1) converges to the solution of its stationary version exponentially in the mean square. Further, by establishing a lemma for compensated Poisson random measures (Lemma 3), we prove that the weak solution to (1) converges to the solution of its stationary version almost surely exponentially. The assumptions in our article require neither monotonically decreasing delays nor the condition \(0\leq \rho'(t),\delta'(t),\gamma'(t)< 1\). Therefore, the current paper can be regarded as an extension of the work of Caraballo and Real [20] to the stochastic setting; at the same time, it extends and improves the results of Taniguchi [35] (where the random external force field does not include delays and the external force field contains a delay, but the memory function \(\rho(t)\) is required to be differentiable), the asymptotic behavior results published in [7] (where the external force fields do not contain delays and the stochastic Navier-Stokes equation has no non-Gaussian Lévy noise perturbation), and the papers of Chen [22] and Wan and Zhou [23] (where the random external force field does not contain discontinuous multiplicative noise).

The remainder of this paper is organized as follows: In Section 2, we briefly present some basic notations, preliminaries. The main results in Section 3 are devoted to studying the asymptotic behavior for the weak solutions of the system (1) with their proofs.

2. Preliminaries

In this section, we introduce notations and preliminary results needed to establish our results. For more details on this section, we refer the reader to [2,8,36,37,38].

We first introduce the following function spaces, which are usual in the study of Navier-Stokes equations:

\[\mathcal{V}:=\{u\in C_{0}^{\infty}(D,\mathbb{R}^2):\text{div \(u =0\)}\}.\] \(\mathbb H:=\) the closure of \(\mathcal{V}\) in \(L^2(D,\mathbb{R}^2)\) with the norm \(|u|=(u,u)^{\frac{1}{2}}\), where for \(u,v\in L^2(D,\mathbb{R}^2),\) \[(u,v)=\sum_{j=1}^{2}\int_{D}u_j(x)v_j(x)dx.\] \(\mathbb V:=\) the closure of \(\mathcal{V}\) in \(\mathbb H_{0}^{1}(D,\mathbb{R}^2)\) with the norm \(\|u\|=((u,u))^{\frac{1}{2}}\), where for \(u,v\in \mathbb H_{0}^{1}(D,\mathbb{R}^2),\) \[((u,v))=\sum_{i,j=1}^{2}\int_{D}\frac{\partial u_i}{\partial x_j}\frac{\partial v_i}{\partial x_j}dx.\] It follows that \(\mathbb H\) and \(\mathbb V\) are separable Hilbert spaces with associated inner products \((\cdot,\cdot)\) and \(((\cdot,\cdot))\) and the following is satisfied \[\mathbb V\subset \mathbb H\equiv \mathbb H^\ast\subset \mathbb V^\ast,\] where injections are dense, continuous, and compact; \(\mathbb H^\ast\) and \(\mathbb V^\ast\) stand for the topological dual of \(\mathbb H\) and \(\mathbb V\), respectively.

Now, let \(P_{ \mathbb H}\) be an orthogonal projector from \(L^2(D,\mathbb{R}^2)\) onto \(\mathbb H\). We define the operator \(A:L^2(D,\mathbb{R}^2)\rightarrow \mathbb H\) by \(Au=-P_{ \mathbb H}\triangle u\) and \(B:\mathbb V\times \mathbb V\rightarrow\mathbb V^\ast\) by \(\langle B(u,v),w\rangle=b(u,v,w)\), \(\forall u,v,w\in \mathbb V,\) where \(\langle\cdot,\cdot\rangle\) denotes the duality \(\langle\mathbb V^\ast,\mathbb V\rangle\) and \(b\) is the trilinear form defined by

\[b(u,v,w)=\sum_{i,j=1}^{2}\int_{D}u^i(x)\frac{\partial v^j}{\partial x_i}w^j(x)dx.\] We also set \(B(u)=B(u,u)\), \(\forall u\in \mathbb V.\)

Furthermore, we shall need some properties of the trilinear form \(b\), and we list below the ones that we will use later on (see [38]):

\begin{align*} &|b(u,v,w)|\leq C_1|u|^{\frac{1}{2}}\|u\|^{\frac{1}{2}}\|v\||w|^{\frac{1}{2}}\|w\|^{\frac{1}{2}}, \quad \forall u,v,w\in \mathbb V,\\ &b(u,v,v)=0,\quad \forall u,v\in \mathbb V,\\ &b(u,u,v-u)-b(v,v,v-u)=-b(v-u,u,v-u),\quad \forall u,v\in \mathbb V, \end{align*} where \(C_1>0\) is an appropriate constant which depends on the regular open domain \(D\) (see [39]).

Let \((\Omega,\mathcal{F},\mathbf{P})\) be a complete probability space equipped with some filtration \((\mathcal F_{t})_{t\geqslant 0}\) satisfying the usual conditions (i.e., it is right continuous and \(\mathcal F_{0}\) contains all \(\mathbf{P}\)-null sets).

With the symbol \(\{W(t)\}_{t\geq 0}\), we denote a \(\mathbb{K}\)-valued \((\mathcal F_{t})_{t\geqslant 0}\)-Wiener process defined on the probability space \((\Omega,\mathcal{F},\mathbf{P})\) with covariance operator \(Q\), i.e.

\[\mathbf{E}\langle W(t),a\rangle_{\mathbb{K}}\langle W(s),b\rangle_{\mathbb{K}}=\min\{t,s\} \langle Qa,b\rangle_{\mathbb{K}},\quad \forall a, b\in\mathbb{K}, \] where \(Q\) is a positive, self-adjoint, trace class operator on \(\mathbb{K}\). In particular, we call such \(\{W(t)\}_{t\geq 0}\) a \(\mathbb{K}\)-valued \(Q\)-Wiener process relative to \((\mathcal F_{t})_{t\geqslant 0}\). We assume that there exist a complete orthonormal system \(\{e_n\}_{n\in \mathbb{N}}\) in \(\mathbb{K}\), a bounded sequence of nonnegative real numbers \(\{\lambda_n\}_{n\in \mathbb{N}}\) such that \(Qe_{n}=\lambda_{n}e_n,\quad n=1,2,3, ...\), and a sequence \(\{\beta_{n}\}_{n\geq 1}\) of independent standard Brownian motions such that \[\langle W(t),e\rangle_{\mathbb{K}}=\Big\langle\sum_{n=1}^{\infty}\sqrt{\lambda_{n}} e_n\beta_{n},e\Big\rangle_{\mathbb{K}},\quad t\geq 0,\quad e\in \mathbb{K},\] and let \(\mathcal F_{t}=\sigma\{W(s): 0\leq s\leq t\}\) be the \(\sigma\)-algebra generated by \(W\).

In order to define stochastic integrals with respect to the \(Q\)-Wiener process \(W(t)\), we introduce the subspace \(\mathbb K_{0}=Q^{\frac{1}{2}}\mathbb{K}\) of \(\mathbb{K}\), which, endowed with the inner product

\[\langle a,b\rangle_{\mathbb K_{0}}=\langle Q^{-\frac{1}{2}}a,Q^{-\frac{1}{2}}b\rangle_{\mathbb{K}},\quad \forall a, b\in\mathbb{K}_{0},\] is a Hilbert space. Let \({\mathcal L_{2}^{0}}=\mathcal L_{2}(\mathbb K_{0};\mathbb{H})\) denote the space of all Hilbert-Schmidt operators from \(\mathbb K_0\) into \(\mathbb{H}\). It turns out to be a separable Hilbert space, equipped with the norm \[\|\psi\|_{{\mathcal L_{2}^{0}}}^{2}=tr((\psi Q^{\frac{1}{2}})(\psi Q^{\frac{1}{2}})^{\ast}),\] for any \(\psi \in {\mathcal L_{2}^{0}}\). Obviously, for any bounded operator \(\psi \in\mathcal L(\mathbb K;\mathbb{H})\) - the set of all linear bounded operators from \(\mathbb{K}\) into \(\mathbb{H}\), this norm reduces to \[\|\psi\|_{{\mathcal L_{2}^{0}}}^{2}=tr(\psi Q\psi^{\ast})=\sum_{n=1}^{\infty}|\sqrt{\lambda_{n}}\psi e_{n}|^{2}.\] Let \(\Phi : (0,\infty)\rightarrow{\mathcal L_{2}^{0}}\) be a predictable, \(\mathcal F_t\)-adapted process such that \[\int_{0}^{t}\mathbf{E}\|\Phi (s)\|_{\mathcal L_{2}^{0}}^{2}ds< \infty,\quad t>0.\] Then, we can define the \(\mathbb{H}\)-valued stochastic integral \(\int_{0}^{t}\Phi(s)dW(s)\) (which is a continuous square-integrable martingale) of \(\Phi\) with respect to the \(\mathbb K\)-valued \(Q\)-Wiener process \(W(t)\) by \[\Big\langle\int_{0}^{t}\Phi(s)dW(s),e\Big\rangle:=\sum_{n=1}^{\infty}\int_{0}^{t}\Big\langle\Phi(s)\sqrt{\lambda_{n}} e_n,e\Big\rangle d\beta_{n}(s)\] for any \(e\in \mathbb H\) using the Itô integral with respect to \(\beta_{n}(s)\). For the construction, we can refer to Da Prato and Zabczyk [8].
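As a quick illustration of this norm (our own example, not part of the construction above): if \(\psi\) acts diagonally on the eigenbasis of \(Q\), say \(\psi e_{n}=\sigma_{n}f_{n}\) for some orthonormal family \(\{f_{n}\}\subset\mathbb H\) and bounded real numbers \(\sigma_{n}\), then \[\|\psi\|_{{\mathcal L_{2}^{0}}}^{2}=\sum_{n=1}^{\infty}|\sqrt{\lambda_{n}}\psi e_{n}|^{2}=\sum_{n=1}^{\infty}\lambda_{n}\sigma_{n}^{2}\leq\Big(\sup_{n}\sigma_{n}^{2}\Big)tr(Q)< \infty,\] since \(Q\) is of trace class; in particular, every such \(\psi\) belongs to \({\mathcal L_{2}^{0}}\).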

Let \(L=(L_t)_{t\geq 0}\) be a \(\mathbb K\)-valued Lévy process such that \(L\) has stationary and independent increments, is stochastically continuous and satisfies \(L_0=0\) almost surely. Let \(p(t)\), \(t\geq 0\), be the law of \(L_t\); then \((p(t))_{t\geq 0}\) is a weakly continuous convolution semigroup of probability measures on \(\mathbb K\). We have the Lévy-Khinchin formula [36], which yields, for all \(t\geq 0\) and \(x\in \mathbb K\),

\[\mathbf E\Big(e^{i\langle x,L_t\rangle_{\mathbb K}}\Big)=e^{t\zeta(x)},\] where \begin{align*} \zeta(x):=i\langle a,x\rangle_{\mathbb K}-\frac{1}{2}\langle Qx,x\rangle_{\mathbb K} +\int_{\mathbb K-\{0\}}\big[e^{i\langle x,y\rangle_{\mathbb K}}-1-\chi_{\{|y|_{\mathbb K}< 1\}}(y)i\langle x,y\rangle_{\mathbb K}\big]\lambda(dy), \end{align*} where \(a\in \mathbb K\) and \(\lambda\) is a Lévy measure or a jump intensity measure of \(L\) on \(\mathbb K-\{0\}\), i.e., \(\int_{\mathbb K-\{0\}}\min(|y|_{\mathbb K}^{2},1)\lambda(dy)< \infty\); \(\chi_Z\) denotes the characteristic function on the set \(Z\subset \mathbb K\); the triple \((a,Q,\lambda)\) is the characteristics of \(L\) and the mapping \(\zeta\) is the characteristic exponent of \(L\). We can also define the Lévy measure on the whole of \(\mathbb K\) via the assignment \(\lambda(\{0\})=0\).
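A simple check of this formula (our own illustration; the rate \(c_{0}>0\) below is not used elsewhere in the paper): take \(\mathbb K=\mathbb R\), \(a=0\), \(Q=0\) and \(\lambda=c_{0}\delta_{1}\), i.e., jumps of size one arriving at rate \(c_{0}\). Since the jump size is not smaller than one, the compensating term vanishes and \[\zeta(x)=c_{0}\big(e^{ix}-1\big),\qquad \mathbf E\big(e^{ixL_t}\big)=e^{tc_{0}(e^{ix}-1)},\] which is the characteristic function of a Poisson process with rate \(c_{0}\), as expected.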

Now, we shall write \(\triangle L_t:=L_t-L_{t-}\), \(\forall t\geq 0,\) where \(L_{t-}:=\lim_{s\uparrow t}L_s.\) Then, almost surely for any \(Z\in\mathcal{B}(\mathbb K-\{0\})\), which denotes the Borel \(\sigma\)-field of \((\mathbb{K}-\{0\})\) and with \(0\notin\) the closure of \(Z\), we get a counting Poisson random measure \(\eta\) on \((\mathbb{K}-\{0\})\):

\[\eta(t,Z)=\# \{0\leq s\leq t,\triangle L_s\in Z\}< \infty,\quad t\geq 0.\] Let \[\widetilde{\eta}(t,dy):=\eta(t,dy)-\lambda(dy)t\] be the compensated Poisson measure that is independent of \(W(t).\)

Let \(\lambda_Z\) denotes the restriction of the measure \(\lambda\) to \(Z\), still denoted by \(\lambda\), such that \(\lambda\) is finite on \(Z\). Denote by \(\mathcal{ P}^{2}([0,T]\times Z;\mathbb{H})\) the space of all predictable mappings \(k:[0,T]\times Z\rightarrow\mathbb{H}\) for which

\[\int_{0}^{T}\int_{Z}\mathbf{E}|k(t,y)|_{\mathbb{H}}^{2}\lambda(dy)dt< \infty.\] We may then define the \(\mathbb{H}\)-valued stochastic integral \[\int_{0}^{T}\int_{Z}k(t,y){\eta}(dt,dy):=\sum_{0\leq t\leq T}k(t,\triangle Y_t)\chi_{Z}(\triangle Y_t),\] where \[Y_t:=\int_{Z}y\eta(t,dy)=\sum_{0\leq s\leq t}\triangle L_s\chi_{Z}(\triangle L_s)\] as a random finite sum which enables us to define \[\int_{0}^{T}\int_{Z}k(t,y)\widetilde{\eta}(dt,dy):=\int_{0}^{T}\int_{Z}k(t,y){\eta}(dt,dy)-\int_{0}^{T}\int_{Z}k(t,y)\lambda(dy)dt.\] Furthermore, we can see that \(\int_{0}^{t}\int_{Z}k(s,y)\widetilde{\eta}(ds,dy)\) is an \(\mathbb H\)-valued centered square-integrable martingale such that \[\mathbf E \Big(\Big|\int_{0}^{T}\int_{Z}k(t,y)\widetilde{\eta}(dt,dy)\Big|_{\mathbb H}^{2}\Big) =\int_{0}^{T}\int_{Z}\mathbf{E}|k(t,y)|_{\mathbb{H}}^{2}\lambda(dy)dt.\] We can refer to Protter [37] for a systematic theory about stochastic integrals of this kind.
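As a quick instance of this isometry (our own illustration): for a deterministic integrand of the form \(k(t,y)=\varphi(t)\chi_{Z_{0}}(y)h\), with \(\varphi\in L^{2}(0,T)\), \(Z_{0}\subset Z\) and \(h\in\mathbb H\), the identity above reduces to \[\mathbf E\Big(\Big|\int_{0}^{T}\int_{Z}\varphi(t)\chi_{Z_{0}}(y)h\,\widetilde{\eta}(dt,dy)\Big|_{\mathbb H}^{2}\Big)=|h|_{\mathbb H}^{2}\,\lambda(Z_{0})\int_{0}^{T}|\varphi(t)|^{2}dt.\]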

Thus, the stochastic 2D Navier-Stokes equations with memory and Poisson jumps (1) can be rewritten as follows in the abstract mathematical setting:

\begin{equation} \label{eq2.1} \begin{cases} dX(t)=\big[-\nu AX(t)-B(X(t))+f(x)+g(X(t-\rho(t)))\big]dt +\sigma(t,X(t-\delta(t)))dW(t) \\ \quad\quad\quad\;\;\;+\int_{Z}k(t,X(t-\gamma(t)),z)\widetilde{\eta}(dt,dz)\quad t\geq 0, \\ X(\theta)=\phi(\theta)\in L^{2}(\Omega,\mathcal{C}([-r,0],\mathbb H)),\quad \theta\in [-r,0],\quad r>0, \end{cases} \end{equation}
(2)
where \(f\in \mathbb V^\ast\), \(g:\mathbb V\rightarrow\mathbb V^\ast\), \(\sigma:[0,+\infty)\times \mathbb V\rightarrow{\mathcal L_{2}^{0}}(\mathbb K,\mathbb H)\), and \(k:[0,+\infty)\times\mathbb H\times(\mathbb{K}-\{0\})\rightarrow\mathbb H\) are progressively measurable; \(L^{2}(\Omega,\mathcal{C}([-r,0],\mathbb H))\) denotes the family of all almost surely bounded \((\mathcal F_{t})_{t\geqslant 0}\)-measurable and \(\mathcal{C}([-r,0],\mathbb H)\)-valued stochastic processes, where \(\mathcal{C}([-r,0],\mathbb H)\) denotes the family of all right-continuous functions \(\phi\) with left-hand limits from \([-r,0]\) to \(\mathbb H\), equipped with the norm \(\|\phi\|_{\mathcal{C}}:=\sup_{-r\leq \theta\leq 0}|\phi(\theta)|\).

Now, we give the definition of the weak solution of system (2).

Definition 1. An \(\mathcal F_{t}\)-adapted process \(X(t)\) is called the weak solution to (2) if the following conditions are satisfied:

  • (i) \(X(t)\in \mathcal{C}(-r,T;\mathbb H)\cap L^{2}(-r,T;\mathbb V)\), a.s., \(\forall T>0\);
  • (ii) the following integral equation holds as an identity in \(\mathbb V^\ast\) a.s., \(\forall t\in [0,T]\),
    \begin{align} \label{eq2.2} X(t)=&X(0)+\int_{0}^{t}\big[-\nu AX(s)-B(X(s))+f(s)+g(X(s-\rho(s)))\big]ds\nonumber\\ &\;\;\;+\int_{0}^{t}\sigma(s,X(s-\delta(s)))dW(s)+\int_{0}^{t}\int_{Z}k(s,X(s-\gamma(s)),z)\widetilde{\eta}(ds,dz). \end{align}
    (3)
For our purpose, we recall the Itô formula, which will play a key role in what follows. Let \(C^{2}(\mathbb H;\mathbb{R}^+)\) denote the space of all real-valued nonnegative functions \(\Upsilon\) on \(\mathbb H\) with the following properties:
  • (a) \(\Upsilon(x)\) is twice (Fréchet) differentiable in \(x\);
  • (b) Both \(\Upsilon_{x}(x)\) and \(\Upsilon_{xx}(x)\) are continuous in \(\mathbb H\) and \(\mathcal{L}(\mathbb H)\).

Lemma 1.[40] Suppose \(\Upsilon\in C^{2}(\mathbb H;\mathbb{R}^+)\) and \(X(t)\), \(t\geq 0\), is a weak solution to (2). Then \begin{align*} \Upsilon(X(t))=&\Upsilon(X(0))+\int_{0}^{t}\mathcal{L}\Upsilon(X(s))ds+\int_{0}^{t}\langle\Upsilon_{x}(X(s)),\sigma(s,X(s-\delta(s)))dW(s)\rangle \nonumber\\ &\;\;\;+\int_{0}^{t}\int_{Z}[\Upsilon(X(s)+k(s,X(s-\gamma(s)),z))-\Upsilon(X(s))]\widetilde{\eta}(ds,dz), \end{align*} where \(\mathcal{L}\) is the associated diffusion operator defined, for any \(x\in \mathbb V\), by \begin{align*} \mathcal{L}\Upsilon(x(t))= &\langle-\nu Ax(t)-B(x(t))+f(t)+g(x(t-\rho(t))),\Upsilon_{x}(x(t))\rangle\\ &\;\;\;+\frac{1}{2}trace\Big(\Upsilon_{xx}(x(t))\sigma(t,x(t-\delta(t)))Q\sigma(t,x(t-\delta(t)))^{\ast}\Big)\\ &\;\;\;+\int_{Z}[\Upsilon(x(t)+k(t,x(t-\gamma(t)),z))-\Upsilon(x(t))-\langle\Upsilon_{x}(x(t)),k(t,x(t-\gamma(t)),z)\rangle]\lambda(dz). \end{align*}

Definition 2. We say that a weak solution \(X(t)\) of system (2) converges to \(u_\infty\in\mathbb H\) exponentially in the mean square if there exist \(c>0\) and \(M_0>0\) such that for all \(t\geq 0\) \[\mathbf E|X(t)-u_\infty|^2\leq M_0e^{-ct}.\] In particular, if \(u_\infty\) is a solution to (2), then it is said that \(u_\infty\) is exponentially stable in the mean square provided that every weak solution to (2) converges to \(u_\infty\) exponentially in the mean square with the same exponential order \(c>0\).

Definition 3. We say that a weak solution \(X(t)\) of system (2) converges to \(u_\infty\in\mathbb H\) almost surely exponentially if there exists \(\gamma>0\) such that \[\lim_{t\to\infty}\frac{\log |X(t)-u_\infty|}{t}\leq -\gamma,\quad a.s.\] In particular, if \(u_\infty\) is a solution to (2), then it is said that \(u_\infty\) is almost surely exponentially stable provided that every weak solution to (2) converges to \(u_\infty\) almost surely exponentially with the same constant \(\gamma\).

3. Main results

In this section, we will discuss the asymptotic behavior for the weak solutions of stochastic 2D Navier-Stokes equations with finite memory and Poisson jumps.

Let \(\lambda_1>0\) be the first eigenvalue of \(A\), then

\[\|v\|^2\geq \lambda_1|v|^2,\quad\forall v\in \mathbb V.\] To investigate the asymptotic behavior for the weak solutions of (2), we assume the following hypotheses:
  • (\(\mathbf H1\)) \(g(0)=0\) and there exists a positive number \(C_g\) such that \[\|g(u)-g(v)\|_{\mathbb V^\ast}\leq C_g|u-v|,\quad \forall u,v\in \mathbb H.\]
  • (\(\mathbf H2\)) There exist integrable functions \(\alpha_1,\gamma_1:[0,\infty)\rightarrow \mathbb{R}^+\) such that, for a certain constant \(\beta_1\geq 0\) and all \(u\in \mathbb H\), \[\|\sigma(t,u)\|_{\mathcal{L}_{2}^{0}}^{2}\leq \alpha_1(t)+(\beta_1+\gamma_1(t))|u-u_\infty|^2.\]
  • (\(\mathbf H3\)) There exist integrable functions \(\alpha_2,\gamma_2:[0,\infty)\rightarrow \mathbb{R}^+\) such that, for a certain constant \(\beta_2\geq 0\) and all \(u\in \mathbb H\), \[\int_{Z}|k(t,u,z)|^2\lambda(dz)\leq \alpha_2(t)+(\beta_2+\gamma_2(t))|u-u_\infty|^2.\]
  • (\(\mathbf H4\)) There exists \(\theta>0\) such that \[\int_{0}^{\infty}e^{\theta t}\alpha_i(t)dt< \infty,\quad \int_{0}^{\infty}e^{\theta t}\gamma_i(t)dt< \infty,\quad i=1,2.\] (A simple admissible choice of \(\alpha_i,\gamma_i\) and \(\beta_i\) is given in the example following this list.)
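For concreteness, one admissible choice (an example of ours, not imposed by the results below) is \(\alpha_{i}(t)=\gamma_{i}(t)=e^{-2\theta t}\) with a fixed \(\theta>0\) and constants \(\beta_{i}\geq 0\), \(i=1,2\): these functions are integrable, and \[\int_{0}^{\infty}e^{\theta t}\alpha_{i}(t)dt=\int_{0}^{\infty}e^{\theta t}\gamma_{i}(t)dt=\int_{0}^{\infty}e^{-\theta t}dt=\frac{1}{\theta}< \infty,\] so \((\mathbf{H4})\) holds, while \((\mathbf{H2})\) and \((\mathbf{H3})\) then reduce to growth bounds on \(\sigma\) and \(k\) with exponentially decaying time-dependent parts.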
We first consider the existence of the stationary solution to the equation
\begin{equation} \label{eq3.1} \begin{aligned} \nu AX+BX=f(x)+g(X)\quad (\text{in \(\mathbb V^\ast\)}). \end{aligned} \end{equation}
(4)
We have the following lemma:

Lemma 2.[20] Suppose that \(g\) satisfies the condition \((\mathbf {H1})\) and \(\nu>\lambda_{1}^{-1}C_g\). Then we have the following:

  • (i) For all \(f\in \mathbb V^\ast\), there exists a stationary solution \(u_\infty\) to (4).
  • (ii) There exists a constant \(C(D)>0\) such that, if \((\nu-\lambda_{1}^{-1}C_g)^2>C(D)\|f\|_{\mathbb V^\ast}\), then the stationary solution to (4) is unique.

Now, using the above lemma, we will discuss the asymptotic behavior for the weak solutions of (2). Hence, throughout this paper we assume that there exists a unique stationary solution \(u_\infty\in \mathbb V\) to (4).

Set \(y(t):=X(t)-u_\infty\) and \(\Upsilon(y(t))=|y(t)|^{2}\). Then the function \(y(t)\) satisfies the following equation: \begin{align*} d(X(t)-u_\infty)&=[-\nu A(X(t)-u_\infty)-(B(X(t))-B(u_\infty))+(g(X(t-\rho(t)))-g(u_\infty))]dt\\ &\quad+\sigma(t,X(t-\delta(t)))dW(t) +\int_{Z}k(t,X(t-\gamma(t)),v)\widetilde{\eta}(dt,dv). \end{align*} Similar to the articles of Caraballo et al. [7], Liu [41] and Wan and Duan [42], in which Gaussian white noise was treated by the Itô formula, applying the Itô formula for Lévy noise (Lemma 1) to the function \(\Upsilon(y(t))=|y(t)|^{2}\) and taking expectations, we easily obtain the following result.
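In a little more detail (our own expansion of this step, using only Lemma 1): for \(\Upsilon(x)=|x|^{2}\) we have \(\Upsilon_{x}(x)=2x\) and \(\Upsilon_{xx}(x)=2I\), so that, with \(y(t)=X(t)-u_\infty\), \[\mathcal{L}\Upsilon(y(t))=2\big\langle -\nu Ay(t)-\big(B(X(t))-B(u_\infty)\big)+g(X(t-\rho(t)))-g(u_\infty),\,y(t)\big\rangle+\|\sigma(t,X(t-\delta(t)))\|_{\mathcal{L}_{2}^{0}}^{2}+\int_{Z}|k(t,X(t-\gamma(t)),z)|^{2}\lambda(dz),\] because \(|y+k|^{2}-|y|^{2}-2\langle y,k\rangle=|k|^{2}\) and \(\frac{1}{2}trace\big(2\sigma Q\sigma^{\ast}\big)=\|\sigma\|_{\mathcal{L}_{2}^{0}}^{2}\). Taking expectations removes the two stochastic integral (martingale) terms, and differentiating in \(t\) gives exactly the identity of Theorem 1 below.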

Theorem 1. Suppose that the conditions \((\mathbf {H1})-(\mathbf {H4})\) hold. Then there exists a unique weak solution \(X(t)\) with \(X(t)-u_\infty\in \mathcal{C}(-r,T;\mathbb H)\cap L^{2}(-r,T;\mathbb V)\) a.s. Furthermore, the following identity holds: \begin{align*} \frac{d}{dt}\mathbf E|X(t)-u_\infty|^2 &=-2\mathbf E\langle\nu A(X(t)-u_\infty),X(t)-u_\infty\rangle -2\mathbf E\langle B(X(t))-B(u_\infty),X(t)-u_\infty\rangle\\ &\;\;\;+2\mathbf E\langle g(X(t-\rho(t)))-g(u_\infty),X(t)-u_\infty\rangle+\mathbf E\|\sigma(t,X(t-\delta(t)))\|_{\mathcal{L}_{2}^{0}}^{2}\\ &\;\;\;+\mathbf E\int_{Z}|k(t,X(t-\gamma(t)),z)|^{2}\lambda(dz). \end{align*} The first main result of this section is the following theorem.

Theorem 2. Suppose that the conditions \((\mathbf {H1})-(\mathbf {H4})\) hold. Then the weak solution \(X(t)\) to (2) converges to the stationary solution \(u_{\infty}\) of (4) exponentially in the mean square, provided that the following inequality

\begin{equation} \label{eq3.2} \begin{aligned} C(D)\sqrt{\lambda_1}\|u_\infty\|+C_g+\frac{\beta_1+\beta_2}{2}< \nu\lambda_1, \end{aligned} \end{equation}
(5)
holds.

Proof. From (5), there exists a positive constant \(\upsilon\) such that \begin{align*} C(D)\sqrt{\lambda_1}\|u_\infty\|+C_g+\frac{\beta_1+\beta_2}{2} < C(D)\sqrt{\lambda_1}\|u_\infty\|+\frac{\beta_1+\beta_2}{2}+\frac{1}{2\upsilon}+\frac{\upsilon C_{g}^{2}}{2}< \nu\lambda_1. \end{align*} Furthermore, there exists a constant \(c\in(0,\theta)\) sufficiently small such that

\begin{equation} \label{eq3.3} \begin{aligned} c+2C(D)\sqrt{\lambda_1}\|u_\infty\|+\beta_1+\beta_2+\frac{1}{2\upsilon}+\upsilon C_{g}^{2}e^{cr}-2\nu\lambda_1< 0. \end{aligned} \end{equation}
(6)
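Let us record one admissible choice (a remark of ours; the proof only needs the existence of such \(\upsilon\) and \(c\)): taking \(\upsilon=\frac{1}{\sqrt{2}\,C_g}\) gives \(\frac{1}{2\upsilon}+\upsilon C_{g}^{2}=\sqrt{2}\,C_g< 2C_g\), so by (5) the left-hand side of (6) is strictly negative at \(c=0\); since it depends continuously (and increasingly) on \(c\), the inequality (6) then holds for every sufficiently small \(c\in(0,\theta)\).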
For convenience, we shall denote \[\alpha(t)=\alpha_1(t)+\alpha_2(t),\quad \eta(t)=(\gamma_1(t)+\gamma_2(t))e^{cr}.\] Since \(\alpha_i,\gamma_i\), \(i=1,2\), are integrable, together with the assumption \((\mathbf {H4})\), we deduce that
\begin{equation} \label{eq3.4} \begin{aligned} \Lambda_1=\int_{0}^{\infty}\eta(t)dt< \infty, \quad \Lambda_2=\int_{0}^{\infty}\alpha(t)dt\leq \Lambda_3=\int_{0}^{\infty}\alpha(t)e^{\theta t}dt< \infty. \end{aligned} \end{equation}
(7)
Define the function \begin{equation*} F(t):= \begin{cases} \mathbf E|X(t)-u_\infty|^2e^{ct}\exp\big(-\int_{0}^{t}[\eta(s)+\alpha(s)e^{cs}]ds\big),\quad t\geq 0,\\ \mathbf E|X(t)-u_\infty|^2e^{ct},\quad t\in[-r,0). \end{cases} \end{equation*} Clearly, \(F(t)\) is a right-continuous function with left-hand limits on \([-r,+\infty)\), and
\begin{align} \label{eq3.5} \frac{dF(t)}{dt}&=e^{ct}\exp\big(-\int_{0}^{t}[\eta(s)+\alpha(s)e^{cs}]ds\big)\Big\{\big[c-\eta(t)-\alpha(t)e^{ct}\big]\mathbf E|X(t)-u_\infty|^2+\frac{d}{dt}\mathbf E|X(t)-u_\infty|^2\Big\}. \end{align}
(8)
Now, from definition on the operator \(B\), we get
\begin{equation} \label{eq3.6} \begin{aligned} \langle B(X(s))-B(u_\infty),X(s)-u_\infty\rangle=b(X(s)-u_\infty,u_\infty,X(s)-u_\infty), \end{aligned} \end{equation}
(9)
and from the properties on trilinear form \(b\), we obtain
\begin{equation} \label{eq3.7} \begin{aligned} |b(X(s)-u_\infty,u_\infty,X(s)-u_\infty)| &\leq C_1|X(s)-u_\infty|^{\frac{1}{2}}\|X(s)-u_\infty\|^{\frac{1}{2}}\|u_\infty\| |X(s)-u_\infty|^{\frac{1}{2}}\|X(s)-u_\infty\|^{\frac{1}{2}}\\ &=C_1|X(s)-u_\infty|\|X(s)-u_\infty\|\|u_\infty\|\\ &\leq {C_1}\lambda_{1}^{-\frac{1}{2}}\|u_\infty\|\|X(s)-u_\infty\|^2. \end{aligned} \end{equation}
(10)
Furthermore, by Young's inequality and the assumption \((\mathbf {H1})\), we have
\begin{equation} \label{eq3.8} \begin{aligned} |\langle g(X(s-\rho(s)))-g(u_\infty),X(s)-u_\infty\rangle|\leq\upsilon C_{g}^2|X(s-\rho(s))-u_\infty|^2+\frac{1}{\upsilon}|X(s)-u_\infty|^2. \end{aligned} \end{equation}
(11)
Hence, \begin{align*} \frac{dF(t)}{dt} &\leq e^{ct}\exp\Big(-\int_{0}^{t}[\eta(s)+\alpha(s)e^{cs}]ds\Big)\Bigg\{\Big[ c-\eta(t)-\alpha(t)e^{ct}-2\nu\lambda_1 \\ &\quad+2C(D)\sqrt{\lambda_1}\|u_\infty\|+\frac{1}{\upsilon}\Big]\mathbf E|X(t)-u_\infty|^2+\upsilon C_{g}^{2}\mathbf E|X(t-\rho(t))-u_\infty|^2\\ &\quad+\mathbf E\|\sigma(t,X(t-\delta(t)))\|_{\mathcal{L}_{2}^{0}}^{2}+\mathbf E\int_{Z}|k(t,X(t-\gamma(t)),z)|^{2}\lambda(dz)\Bigg\}\\ &\leq\Big(c+2C(D)\sqrt{\lambda_1}\|u_\infty\|+\frac{1}{\upsilon}-\eta(t)-2\nu\lambda_1\Big)F(t)+\alpha(t)e^{ct}-\alpha(t)e^{ct}F(t)\\ &\quad+e^{ct}\exp\Big(-\int_{0}^{t}[\eta(s)+\alpha(s)e^{cs}]ds\Big)\upsilon C_{g}^{2}\mathbf E|X(t-\rho(t))-u_\infty|^2\\ &\quad+e^{ct}\exp\Big(-\int_{0}^{t}[\eta(s)+\alpha(s)e^{cs}]ds\Big)\big[\beta_1+\gamma_1(t)\big]\mathbf E|X(t-\delta(t))-u_\infty|^2 \\&\quad+e^{ct}\exp\Big(-\int_{0}^{t}[\eta(s)+\alpha(s)e^{cs}]ds\Big)\big[\beta_2+\gamma_2(t)\big]\mathbf E|X(t-\gamma(t))-u_\infty|^2. \end{align*} In what follows, we claim that for any \(t\geq 0\)
\begin{equation} \label{eq3.9} \begin{aligned} F(t)\leq \widetilde{M}:=1+\sup_{[-r,0]}\mathbf E|X(t)-u_\infty|^2. \end{aligned} \end{equation}
(12)
If the inequality (12) does not hold, then there exist \(t^{\ast}>0\) and \(\varepsilon>0\) such that
\begin{equation} \label{eq3.10} \begin{aligned} F(t)< \widetilde{M},\quad 0\leq t< t^{\ast}, \quad F(t^\ast)=\widetilde{M},\quad F(t)>\widetilde{M},\quad t^\ast< t\leq t^\ast+\varepsilon. \end{aligned} \end{equation}
(13)
From this and (8), it can be shown that
\begin{equation} \label{eq3.11} \begin{aligned} \frac{dF(t^\ast)}{dt}\geq 0. \end{aligned} \end{equation}
(14)
Furthermore, \begin{align*} \frac{dF(t^{\ast})}{dt} &\leq\Big(c+2C(D)\sqrt{\lambda_1}\|u_\infty\|+\frac{1}{\upsilon}-\eta(t^{\ast})-2\nu\lambda_1\Big)F(t^{\ast}) +\alpha(t^{\ast})e^{ct^{\ast}}-\alpha(t^{\ast})e^{ct^{\ast}}F(t^{\ast})\\ &\quad+e^{ct^{\ast}}\exp\Big(-\int_{0}^{t^{\ast}}[\eta(s)+\alpha(s)e^{cs}]ds\Big)\upsilon C_{g}^{2}\mathbf E|X(t^{\ast}-\rho(t^{\ast}))-u_\infty|^2\\ &\quad+e^{ct^{\ast}}\exp\Big(-\int_{0}^{t^{\ast}}[\eta(s)+\alpha(s)e^{cs}]ds\Big)\big[\beta_1+\gamma_1(t^{\ast})\big]\mathbf E|X(t^{\ast}-\delta(t^{\ast}))-u_\infty|^2 \\&\quad+e^{ct^{\ast}}\exp\Big(-\int_{0}^{t^{\ast}}[\eta(s)+\alpha(s)e^{cs}]ds\Big)\big[\beta_2+\gamma_2(t^{\ast})\big]\mathbf E|X(t^{\ast}-\gamma(t^{\ast}))-u_\infty|^2. \end{align*} Next, we split the following cases to derive the desired assertion.
  • Case 1: If \(t^{\ast}-\rho(t^{\ast})\geq 0, t^{\ast}-\delta(t^{\ast})\geq 0, t^{\ast}-\gamma(t^{\ast})\geq 0\), we then have from (6) and (13) that \begin{align*} \frac{dF(t^{\ast})}{dt} &\leq\Big(c+2C(D)\sqrt{\lambda_1}\|u_\infty\|+\frac{1}{\upsilon}-\eta(t^{\ast})-2\nu\lambda_1\Big)F(t^{\ast}) +\alpha(t^{\ast})e^{ct^{\ast}}-\alpha(t^{\ast})e^{ct^{\ast}}F(t^{\ast})\\ &\quad+e^{c\rho(t^{\ast})}\exp\Big(-\int_{t^\ast-\rho(t^{\ast})}^{t^{\ast}}[\eta(s)+\alpha(s)e^{cs}]ds\Big)\upsilon C_{g}^{2} F\big(X(t^{\ast}-\rho(t^{\ast}))\big)\\ &\quad+e^{c\delta(t^{\ast})}\exp\Big(-\int_{t^\ast-\delta(t^{\ast})}^{t^{\ast}}[\eta(s)+\alpha(s)e^{cs}]ds\Big)\big[\beta_1+\gamma_1(t^{\ast})\big] F\big(X(t^{\ast}-\delta(t^{\ast}))\big) \\&\quad+e^{c\gamma(t^{\ast})}\exp\Big(-\int_{t^\ast-\gamma(t^{\ast})}^{t^{\ast}}[\eta(s)+\alpha(s)e^{cs}]ds\Big)\big[\beta_2+\gamma_2(t^{\ast})\big] F\big(X(t^{\ast}-\gamma(t^{\ast}))\big) \end{align*} \begin{align*} &\leq\Big(c+2C(D)\sqrt{\lambda_1}\|u_\infty\|+\beta_1+\beta_2+\frac{1}{\upsilon}+\upsilon C_{g}^{2}e^{cr}-2\nu\lambda_1\Big)\widetilde{M}< 0,\end{align*} which contradicts with (14). That is, the desired assertion (12) must hold.
  • Case 2: If \(t^{\ast}-\rho(t^{\ast})\leq 0, t^{\ast}-\delta(t^{\ast})\leq 0, t^{\ast}-\gamma(t^{\ast})\leq 0\), we then have from (6) and (13) that \begin{align*} \frac{dF(t^{\ast})}{dt} &\leq\Big(c+2C(D)\sqrt{\lambda_1}\|u_\infty\|+\frac{1}{\upsilon}-\eta(t^{\ast})-2\nu\lambda_1\Big)F(t^{\ast}) +\alpha(t^{\ast})e^{ct^{\ast}}-\alpha(t^{\ast})e^{ct^{\ast}}F(t^{\ast})\\ &\quad+e^{c\rho(t^{\ast})}\exp\Big(-\int_{0}^{t^{\ast}}[\eta(s)+\alpha(s)e^{cs}]ds\Big)\upsilon C_{g}^{2} F\big(X(t^{\ast}-\rho(t^{\ast}))\big)\\ &\quad+e^{c\delta(t^{\ast})}\exp\Big(-\int_{0}^{t^{\ast}}[\eta(s)+\alpha(s)e^{cs}]ds\Big)\big[\beta_1+\gamma_1(t^{\ast})\big] F\big(X(t^{\ast}-\delta(t^{\ast}))\big) \\&\quad+e^{c\gamma(t^{\ast})}\exp\Big(-\int_{0}^{t^{\ast}}[\eta(s)+\alpha(s)e^{cs}]ds\Big)\big[\beta_2+\gamma_2(t^{\ast})\big] F\big(X(t^{\ast}-\gamma(t^{\ast}))\big)\\ &\leq\Big(c+2C(D)\sqrt{\lambda_1}\|u_\infty\|+\beta_1+\beta_2+\frac{1}{\upsilon}+\upsilon C_{g}^{2}e^{cr}-2\nu\lambda_1\Big)\widetilde{M}< 0. \end{align*}
This is a contradiction. Hence, (12) holds true for any \(t\geq 0\). The remaining cases, for example \(t^{\ast}-\rho(t^{\ast})\geq 0\), \(t^{\ast}-\delta(t^{\ast})\leq 0\), \(t^{\ast}-\gamma(t^{\ast})\leq 0\), and so on, can be treated in the same way as Cases 1 and 2, and (12) follows.

Therefore, from (12), we infer that

\[\mathbf E|X(t)-u_\infty|^2\leq \widetilde{M}e^{-ct}\exp\Big(\int_{0}^{t}[\eta(s)+\alpha(s)e^{cs}]ds\Big)\stackrel{(7)}{\leq}\widetilde{M}e^{\Lambda_1+\Lambda_3}e^{-ct},\quad \forall t\geq 0.\] This completes the proof of the theorem.

In the following, \(C\) will denote a generic constant whose values might change from line to line.

In order to prove the almost sure exponential stability of the weak solution of (2), we shall establish the following lemma.

Lemma 3. For any \(t\geq 0\), there exists a constant \(C>0\) such that \begin{align*} &\mathbf E\sup_{\tau\in[0,t]}\Big|\int_{0}^{\tau}\int_{Z}\Big(\big|k\big(s,X(s-\gamma(s)),z\big)\big|_{\mathbb H}^{2}+2\big\langle X(s-),k\big(s,X(s-\gamma(s)),z\big)\big\rangle_{\mathbb H}\Big)\widetilde{\eta}(ds,dz)\Big|\\ &\leq C\mathbf E\int_{0}^{t}\int_{Z}\big|k\big(\tau,X(\tau-\gamma(\tau)),z\big)\big|_{\mathbb H}^{2}\lambda(dz)d\tau+\frac{1}{4}\mathbf E\sup_{\tau\in[0,t]}|X(\tau)|_{\mathbb H}^{2}. \end{align*}

Proof. Set \[J_\tau:=\int_{0}^{\tau}\int_{Z}\Big(\big|k\big(s,X(s-\gamma(s)),z\big)\big|_{\mathbb H}^{2}+2\big\langle X(s-),k\big(s,X(s-\gamma(s)),z\big)\big\rangle_{\mathbb H}\Big)\widetilde{\eta}(ds,dz).\] Then, with \(D_p\) denoting the set of jump times of \(L\) and \(p(s):=\triangle L_s\), \begin{align*} [J,J]_{t}^{\frac{1}{2}} &=\Big\{\sum_{s\in D_p,s\leq t}\Big(|k\big(s,X(s-\gamma(s)),p(s)\big)|_{\mathbb H}^{2}+2\big\langle X(s-),k\big(s,X(s-\gamma(s)),p(s)\big)\big\rangle_{\mathbb H}\Big)^2\Big\}^{\frac{1}{2}}\\ &\leq C\Big(\sum_{s\in D_p,s\leq t}\big|k\big(s,X(s-\gamma(s)),p(s)\big)\big|_{\mathbb H}^{4}\Big)^{\frac{1}{2}} +C\Big(\sum_{s\in D_p,s\leq t}\big|X(s-)\big|_{\mathbb H}^{2}\big|k\big(s,X(s-\gamma(s)),p(s)\big)\big|_{\mathbb H}^{2}\Big)^{\frac{1}{2}}\\ &\leq C\sum_{s\in D_p,s\leq t}\big|k\big(s,X(s-\gamma(s)),p(s)\big)\big|_{\mathbb H}^{2} +C\sup_{0\leq s\leq t}(|X(s-)|_{\mathbb H})\Big(\sum_{s\in D_p,s\leq t}\big|k\big(s,X(s-\gamma(s)),p(s)\big)\big|_{\mathbb H}^{2}\Big)^{\frac{1}{2}}\\ &\leq C\sum_{s\in D_p,s\leq t}\big|k\big(s,X(s-\gamma(s)),p(s)\big)\big|_{\mathbb H}^{2}+\frac{1}{4}\sup_{0\leq s\leq t}(|X(s-)|_{\mathbb H}^{2}). \end{align*} By the Burkholder-Davis-Gundy inequality [43], we obtain that \begin{align*} \mathbf E[\sup_{\tau\in[0,t]}|J_\tau|]&\leq C\mathbf E([J,J]_{t}^{\frac{1}{2}})\\ &\leq C\mathbf E\Big(\sum_{s\in D_p,s\leq t}\big|k\big(s,X(s-\gamma(s)),p(s)\big)\big|_{\mathbb H}^{2}\Big)+ \frac{1}{4}\mathbf E\Big(\sup_{0\leq s\leq t}(|X(s-)|_{\mathbb H}^{2})\Big)\\ &=C\mathbf E\int_{0}^{t}\int_{Z}\big|k\big(\tau,X(\tau-\gamma(\tau)),z\big)\big|_{\mathbb H}^{2}\lambda(dz)d\tau+\frac{1}{4}\mathbf E\sup_{\tau\in[0,t]}|X(\tau)|_{\mathbb H}^{2}. \end{align*} The proof is therefore complete.

We have the following theorem:

Theorem 3. Assume that all the assumptions of Theorem 2 are satisfied. Then, any weak solution \(X(t)\) to (2) converges to the stationary solution \(u_\infty\) of (4) almost surely exponentially.

Proof. Let \(n_1,n_2\) and \(n_3\) be positive integers such that \[n_1-\rho(n_1)\geq n_1-r\geq 1,\quad n_2-\delta(n_2)\geq n_2-r\geq 1,\quad n_3-\gamma(n_3)\geq n_3-r\geq 1.\] Set \(n=\max\{n_1,n_2,n_3\}\). By the Itô formula, it follows that, for any \(t\geq n\), \begin{align*} |X(t)-u_\infty|^2 &=|X(n)-u_\infty|^2-2\int_{n}^{t}\langle\nu A(X(s)-u_\infty),X(s)-u_\infty\rangle ds-2\int_{n}^{t}\langle B(X(s))-B(u_\infty),X(s)-u_\infty\rangle ds\\ &\;\;\;+2\int_{n}^{t}\langle g(X(s-\rho(s)))-g(u_\infty), X(s)-u_\infty\rangle ds+\int_{n}^{t}\|\sigma(s,X(s-\delta(s)))\|_{\mathcal{L}_{2}^{0}}^{2}ds\\ &\;\;\;+2\int_{n}^{t}\langle X(s)-u_\infty,\sigma(s,X(s-\delta(s))) dW(s)\rangle +\int_{n}^{t}\int_{Z}|k(s,X(s-\gamma(s)),z)|^2\lambda(dz)ds\\ &\;\;\;+\int_{n}^{t}\int_{Z}\big[|k(s,X(s-\gamma(s)),z)|^2+2\langle X(s)-u_\infty,k(s,X(s-\gamma(s)),z)\rangle \big] \widetilde{\eta}(ds,dz). \end{align*} In view of Burkholder-Davis-Gundy inequality and the Young inequality, we have

\begin{equation} \label{eq3.12} \begin{aligned} &2\mathbf E\Big[\sup_{t\in[n,n+1]}\int_{n}^{t}\langle X(s)-u_\infty,\sigma(s,X(s-\delta(s)))dW(s)\rangle\Big]\\ &\leq 8\mathbf E\Big[\int_{n}^{n+1}|X(s)-u_\infty|^{2}\|\sigma(s,X(s-\delta(s)))\|_{\mathcal{L}_{2}^{0}}^{2}ds\Big]^{\frac{1}{2}}\\ &\leq \frac{1}{2}\mathbf E\Big(\sup_{t\in[n,n+1]}|X(t)-u_\infty|^{2}\Big)+32\int_{n}^{n+1}\mathbf E\|\sigma(s,X(s-\delta(s)))\|_{\mathcal{L}_{2}^{0}}^{2}ds. \end{aligned} \end{equation}
(15)
Applying Lemma 3, we can get \begin{align*} &\mathbf E\sup_{t\in[n,n+1]}\int_{n}^{t}\int_{Z}\Big(\big|k\big(s,X(s-\gamma(s)),z\big)\big|^{2}+2\big\langle X(s)-u_\infty,k\big(s,X(s-\gamma(s)),z\big)\big\rangle\Big)\widetilde{\eta}(ds,dz)\\ &\leq C\mathbf E\int_{n}^{n+1}\int_{Z}\big|k\big(s,X(s-\gamma(s)),z\big)\big|^{2}\lambda(dz)ds+\frac{1}{4}\mathbf E\Big(\sup_{t\in[n,n+1]}|X(t)-u_\infty|^{2}\Big). \end{align*} Hence, \begin{align*} \mathbf E\Big(\sup_{t\in[n,n+1]}|X(t)-u_\infty|^{2}\Big) &\leq \mathbf E|X(n)-u_\infty|^2+\Big(2C(D)\sqrt{\lambda_1}\|u_\infty\|+\frac{1}{\upsilon}-2\nu\lambda_1\Big)\int_{n}^{n+1}\mathbf E|X(s)-u_\infty|^2ds\\ &\quad+\upsilon C_{g}^{2}\int_{n}^{n+1}\mathbf E|X(s-\rho(s))-u_\infty|^2ds+33\int_{n}^{n+1}\mathbf E\|\sigma(s,X(s-\delta(s)))\|_{\mathcal{L}_{2}^{0}}^{2}ds\\ &\quad+(C+1)\mathbf E\int_{n}^{n+1}\int_{Z}\big|k\big(s,X(s-\gamma(s)),z\big)\big|^{2}\lambda(dz)ds+\frac{3}{4}\mathbf E\Big(\sup_{t\in[n,n+1]}|X(t)-u_\infty|^{2}\Big). \end{align*} This implies that \begin{align*} \mathbf E\Big(\sup_{t\in[n,n+1]}|X(t)-u_\infty|^{2}\Big) &\leq 4\mathbf E|X(n)-u_\infty|^2+4\Big(2C(D)\sqrt{\lambda_1}\|u_\infty\|+\frac{1}{\upsilon}-2\nu\lambda_1\Big)\int_{n}^{n+1}\mathbf E|X(s)-u_\infty|^2ds\\ &\quad+4\upsilon C_{g}^{2}\int_{n}^{n+1}\mathbf E|X(s-\rho(s))-u_\infty|^2ds\\ &\quad+132\int_{n}^{n+1}\big[\alpha_1(s)+(\beta_1+\gamma_1(s))\mathbf E|X(s-\delta(s))-u_\infty|^2\big]ds\\ &\quad+4(C+1)\int_{n}^{n+1}\big[\alpha_2(s)+(\beta_2+\gamma_2(s))\mathbf E|X(s-\gamma(s))-u_\infty|^2\big]ds. \end{align*} On the other hand, from Theorem 2, it is easy to show that \begin{align*} \mathbf E\Big(\sup_{t\in[n,n+1]}|X(t)-u_\infty|^{2}\Big) \leq 4M_0e^{-cn}+4\int_{n}^{n+1}M_0e^{-cs}[\gamma^\ast(s)+\alpha^\ast(s)e^{cs}]ds, \end{align*} where \begin{eqnarray*} \alpha^\ast(t):=33\alpha_1(t)+4(C+1)\alpha_2(t), \end{eqnarray*} \begin{eqnarray*} \gamma^\ast(t):=2C(D)\sqrt{\lambda_1}\|u_\infty\|+\frac{1}{\upsilon}-2\nu\lambda_1+\big[\upsilon C_{g}^{2}+33(\beta_1+\gamma_1(t))+(C+1)(\beta_2+\gamma_2(t))\big]e^{cr}. \end{eqnarray*} In view of assumption \((\mathbf {H4})\), there exists a positive constant \(\Lambda\) such that \[\mathbf E\Big(\sup_{t\in[n,n+1]}|X(t)-u_\infty|^{2}\Big)\leq 4M_0e^{-cn}\Big(1+\frac{\Lambda}{c}\Big).\] Let \(\epsilon_n>0\) be any fixed positive real number. Then by Chebyshev's inequality, we deduce that \[\mathbf P\Big\{\sup_{t\in[n,n+1]}|X(t)-u_\infty|>\epsilon_n\Big\}\leq \frac{4M_0e^{-cn}\Big(1+\frac{\Lambda}{c}\Big)}{\epsilon_{n}^{2}}.\] Now take \(\epsilon_n=e^{-\frac{(c-\varepsilon) n}{4}}\), where \(\varepsilon\in (0,c)\). Then by the Borel-Cantelli lemma [44], we obtain \[\overline{\lim}_{t\rightarrow\infty}\frac{\log|X(t)-u_\infty|}{t}\leq -\frac{c-\varepsilon}{4},\quad a.s.\] Letting \(\varepsilon\rightarrow 0^+\), this completes the proof of the theorem.
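To spell out the final step (our own expansion of the Chebyshev/Borel-Cantelli argument): with \(\epsilon_n=e^{-\frac{(c-\varepsilon)n}{4}}\), \[\mathbf P\Big\{\sup_{t\in[n,n+1]}|X(t)-u_\infty|>\epsilon_n\Big\}\leq 4M_0\Big(1+\frac{\Lambda}{c}\Big)e^{-cn+\frac{(c-\varepsilon)n}{2}}=4M_0\Big(1+\frac{\Lambda}{c}\Big)e^{-\frac{(c+\varepsilon)n}{2}},\] which is summable in \(n\). Hence, for almost every \(\omega\) there is \(n_0(\omega)\) such that \(\sup_{t\in[n,n+1]}|X(t)-u_\infty|\leq e^{-\frac{(c-\varepsilon)n}{4}}\) for all \(n\geq n_0(\omega)\), and therefore, for \(t\in[n,n+1]\) with \(n\geq n_0(\omega)\), \[\frac{\log|X(t)-u_\infty|}{t}\leq -\frac{(c-\varepsilon)n}{4(n+1)}\longrightarrow-\frac{c-\varepsilon}{4}\quad\text{as } n\to\infty,\] which is the asserted bound.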

Remark 1. We consider a special case of the system (1) with variable delays when \(\rho\equiv\delta\) and \(\gamma\equiv 0\). That is, our system (1) reduces to the following system

\begin{equation} \label{eq3.13} \begin{cases} dX=\big[\nu \triangle X-\langle X,\nabla \rangle X-\nabla p+f(t)+g(X(t-\rho(t)))\big]dt +\sigma(X(t-\rho(t)))dW(t) \\ \text{div \(X=0\)} \quad in \quad (0,+\infty)\times D,\quad X(t,x)=0 \quad on \quad(0,+\infty)\times \partial D,\quad \\ X(0,x)=X_{0}(x), \quad X(t,x)=\phi(t,x), \quad (t,x)\in (-r,0)\times D. \end{cases} \end{equation}
(16)
The system (16) has been recently studied by Chen [22]. Furthermore, Wan and Zhou [23] discussed the following system \begin{equation*} \begin{cases} dX(t)=\big[-\nu AX(t)-B(X(t))+f(X(t))+g(X(\rho(t)))\big]dt +\sigma(t,X(t))dW(t), \\ \text{div \(X=0\)} \quad in \quad (0,+\infty)\times D,\quad X(t,x)=0 \quad on \quad(0,+\infty)\times \partial D,\quad \\ X(t,x)=\phi(t,x), \text{ for \(x\in D\), and \(t\in [-r,0]\), with \(r>0\)}, \end{cases} \end{equation*} and Caraballo et al. [7] also studied the stability of the stationary solutions of the following stochastic 2D Navier-Stokes equations without memory: \begin{equation*} \begin{cases} dX(t)=\big[-\nu AX(t)-B(X(t))+g(X(t))\big]dt +\sigma(t,X(t))dW(t), \\ \text{div \(X=0\)} \quad in \quad (0,+\infty)\times D,\quad X(t,x)=0 \quad on \quad(0,+\infty)\times \partial D,\\ X(0,x)=X_{0}(x),\quad x\in D. \end{cases} \end{equation*} By using the method in our paper, the conclusions of some theorems in [7,22,23] can also be easily obtained. Obviously, our work has extended the asymptotic behavior results of the above works to cover a class of much more general stochastic 2D Navier-Stokes equations with memory and discontinuous multiplicative noise.

Remark 2. If \(\sigma\equiv 0\) in the system (16), then by utilizing the direct method, Caraballo and Real [20] considered the asymptotic behavior of the weak solutions, and Taniguchi [35] investigated the exponential stability of energy solutions to 2D stochastic functional Navier-Stokes equations perturbed by a Lévy process. However, unlike the works [20,35], we do not require the function \(\rho(t)\) to be differentiable and to satisfy \(0\leq \rho'(t)< 1\). Therefore, our results extend and improve those of Caraballo and Real [20] and Taniguchi [35].

Conflict of Interests

The author declares no conflict of interest.

References

  1. Lions, J. L. (1969). Quelques méthodes de résolution des problemes aux limites non linéaires. Springer. [Google Scholor]
  2. Temam, R. (1979). Navier-Stokes equations, Theory and Numerical Analysis, Second edition. North-Holland, Amsterdam. [Google Scholor]
  3. Rosa, R. (1998). The global attractor for the 2D Navier-Stokes flow on some unbounded domains. Nonlinear Analysis, 32(1), 71-86. [Google Scholor]
  4. Leray, J. (1933). Etude de diverses équations intégrales non linéaires et de quelques problèmes que pose l’hydrodynamique. Journal de Mathématiques Pures et Appliquées., 12, 1-82. [Google Scholor]
  5. Landau, L. D., & Lifshitz, E. M. (1959). Fluid Mechanics, Vol. 6 of Course of Theoretical Physics, Pergamon, Oxford. [Google Scholor]
  6. Bensoussan, A., & Temam, R. (1973). Equations stochastiques du type Navier-Stokes. Journal of Functional Analysis, 13(2), 195-222. [Google Scholor]
  7. Caraballo, T., Langa, J. A., & Taniguchi, T. (2002). The exponential behaviour and stabilizability of stochastic 2D-Navier-Stokes equations. Journal of Differential Equations, 179(2), 714-737. [Google Scholor]
  8. Da Prato, G., & Zabczyk, J. (1992). Stochastic Equations in Infinite Dimensions. Cambridge University Press, UK. [Google Scholor]
  9. Flandoli, F., & Gatarek, D. (1995). Martingale and stationary solutions for stochastic Navier-Stokes equations. Probability Theory and Related Fields, 102(3), 367-391. [Google Scholor]
  10. Taniguchi, T. (2011). The existence of energy solutions to 2-dimensional non-Lipschitz stochastic Navier-Stokes equations in unbounded domains. Journal of Differential Equations, 251(12), 3329-3362. [Google Scholor]
  11. Caraballo, T., & Shaikhet, L. (2014). Stability of delay evolution equations with stochastic perturbations. Communications on Pure and Applied Analysis, 13(5), 2095-2113.[Google Scholor]
  12. Guzzo, S. M., & Planas, G. (2015). Existence of solutions for a class of Navier-Stokes equations with infinite delay. Applicable Analysis, 94(4), 840-855. [Google Scholor]
  13. Hale, J. K., & Verduyn Lunel, S. M. (1993). Introduction to Functional Differential Equations (Vol. 99). Springer Science & Business Media. [Google Scholor]
  14. Huan, D. D. (2015). On the controllability of nonlocal second-order impulsive neutral stochastic integro-differential equations with infinite delay. Asian Journal of Control, 17(4), 1-10. [Google Scholor]
  15. Huan, D. D., & Gao, H. (2015). A note on the existence of stochastic integro-differential equations with memory. Mathematical Methods in the Applied Sciences, 38(11), 2105-2119. [Google Scholor]
  16. Mao, X. (1997). Stochastic Differential Equations and Applications. [Google Scholor]
  17. Caraballo, T., & Real, J. (2004). Attractors for 2D-Navier-Stokes models with delays. Journal of Differential Equations, 205(2), 271-297. [Google Scholor]
  18. García-Luengo, J., Marín-Rubio, P., & Real, J.(2013). Pullback attractors for 2D Navier-Stokes equations with delays and their regularity. Advanced Nonlinear Studies, 13(2), 331-357. [Google Scholor]
  19. Marín-Rubio, P., & Real, J. (2007). Attractors for 2D-Navier-Stokes equations with delays on some unbounded domains. Nonlinear Analysis, 67(10), 2784-2799. [Google Scholor]
  20. Caraballo, T., & Real, J. (2003). Asymptotic behaviour of two-dimensional Navier-Stokes equations with delays. Proceedings of the Royal Society of London. Series A: Mathematical, Physical and Engineering Sciences, 459(2040), 3181-3194. [Google Scholor]
  21. Planas, G., & Hernández, E. (2008). Asymptotic behaviour of two-dimensional time-delayed Navier-Stokes equations. Discrete & Continuous Dynamical Systems, 21(4), 1245-1258. [Google Scholor]
  22. Chen, H. (2012). Asymptotic behavior of stochastic two-dimensional Navier-Stokes equations with delays. Proceedings-Mathematical Sciences, 122(2), 283-295. [Google Scholor]
  23. Wan, L., & Zhou, Q. (2011). Asymptotic behaviors of stochastic two-dimensional Navier-Stokes equations with finite memory. Journal of Mathematical Physics, 52(4), 042703. [Google Scholor]
  24. Liu, L. F., & Caraballo, T. (2018). Analysis of a Stochastic 2D-Navier-Stokes Model with Infinite Delay. Journal of Dynamics and Differential Equations, 2018, 1-26. [Google Scholor]
  25. Razafimandimby, P. A., & Sango, M. (2010). Asymptotic behavior of solutions of stochastic evolution equations for second grade fluids. Comptes Rendus Mathematique, 348(13-14), 787-790. [Google Scholor]
  26. Razafimandimby, P. A., & Sango, M. (2012). On the exponential behaviour of stochastic evolution equations for non-Newtonian fluids. Applicable Analysis, 91(12), 2217-2233. [Google Scholor]
  27. Bessaih, H., Hausenblas, E., & Razafimandimby, P. A. (2015). Strong solutions to stochastic hydrodynamical systems with multiplicative noise of jump type. Nonlinear Differential Equations and Applications NoDEA, 22(6), 1661-1697. [Google Scholor]
  28. Huan, D., & Agarwal, A. (2014). Global attracting and quasi-invariant sets for stochastic Volterra-Levin equations with jumps. Dynamics of Continuous, Discrete and Impulsive Systems, Series A: Mathematical Analysis, 21, 343-353.[Google Scholor]
  29. Huan, D. D., & Agarwal, R. P. (2015). Neutral SFDEs with jumps under Caratheodory conditions. Dynamics of Continuous, Discrete and Impulsive Systems, Series A: Mathematical Analysis, 22, 81-93. [Google Scholor]
  30. Huan, D. D., & Agarwal, R. P. (2018). Asymptotic behavior, attracting and quasi-invariant sets for impulsive neutral SPFDE driven by Lévy noise. Stochastics and Dynamics, 18 (01), 1-21. [Google Scholor]
  31. Huan, D. D., & Gao, H. (2015). Controllability of nonlocal second-order impulsive neutral stochastic functional integro-differential equations with delay and Poisson jumps. Cogent Engineering, 2(1), 1065585. [Google Scholor]
  32. Motyl, E.(2013). Stochastic Navier-Stokes equations driven by Lévy noise in unbounded 3D domains. Potential Analysis, 38(3), 863-912. [Google Scholor]
  33. Dong, Z., Li, W. V., & Zhai, J. (2012). Stationary weak solutions for stochastic 3D Navier-Stokes equations with Lévy noise. Stochastics and Dynamics, 12(01), 1150006. [Google Scholor]
  34. Brzezniak, Z., Hausenblas, E., & Zhu, J. (2013). 2D stochastic Navier-Stokes equations driven by jump noise. Nonlinear Analysis: Theory, Methods & Applications, 79, 122-139. [Google Scholor]
  35. Taniguchi, T. (2012). The existence and asymptotic behaviour of energy solutions to stochastic 2D functional Navier-Stokes equations driven by Lévy processes. Journal of Mathematical Analysis and Applications, 385(2), 634-654. [Google Scholor]
  36. Peszat, S., & Zabczyk, J. (2007). Stochastic partial differential equations with Lévy noise: An evolution equation approach (Vol. 113). Cambridge University Press. [Google Scholor]
  37. Protter, P. E. (2004). Stochastic integration and differential equations, Second edition. Springer, New York. [Google Scholor]
  38. Temam, R. (1988). Infinite Dimensional Dynamical Systems in Mechanics and Physics. Springer-Verlag, New York/Berlin. [Google Scholor]
  39. Constantin, P., & Foias, C. (1988). Navier-Stokes Equations. University of Chicago Press. [Google Scholor]
  40. Gyöngy, I., & Krylov, N. V. (1981/82). On stochastic equations with respect to semimartingales II. Itô formula in Banach spaces. Stochastics, 6, 153-173. [Google Scholor]
  41. Liu, K. (2006). Stability of Infinite Dimensional Stochastic Differential Equations with Applications. Chapman and Hall, CRC, London, UK. [Google Scholor]
  42. Wan, L., & Duan, J. (2008). Exponential stability of non-autonomous stochastic partial differential equations with finite memory. Statistics & Probability Letters, 78(5), 490-498. [Google Scholor]
  43. Kallenberg, O. (2002). Foundations of Modern Probability. Berlin, Springer. [Google Scholor]
  44. Ash, R. B. (2000). Probability and Measure Theory, Second edition. Academic Press, San Diego. [Google Scholor]
]]>
Stochastic dynamic for an extensible beam equation with localized nonlinear damping and linear memory https://old.pisrt.org/psr-press/journals/oms-vol-4-2020/stochastic-dynamic-for-an-extensible-beam-equation-with-localized-nonlinear-damping-and-linear-memory/ Mon, 30 Nov 2020 10:50:13 +0000 https://old.pisrt.org/?p=4729
OMS-Vol. 4 (2020), Issue 1, pp. 400 - 416 Open Access Full-Text PDF
Abdelmajid Ali Dafallah, Fadlallah Mustafa Mosa, Mohamed Y. A. Bakhet, Eshag Mohamed Ahmed
Abstract: In this paper, we are concerned with proving the existence of a random attractor for the stochastic dynamical system generated by the extensible beam equation with localized non-linear damping and linear memory defined on a bounded domain. First we investigate the existence and uniqueness of solutions and a bounded absorbing set, then the asymptotic compactness. The longtime behavior of solutions is analyzed. In particular, in the non-autonomous case, the existence of a random attractor for solutions is achieved.
]]>

Open Journal of Mathematical Sciences

Stochastic dynamic for an extensible beam equation with localized nonlinear damping and linear memory

Abdelmajid Ali Dafallah\(^1\), Fadlallah Mustafa Mosa, Mohamed Y. A. Bakhet, Eshag Mohamed Ahmed
Faculty of Petroleum and Hydrology Engineering, Alsalam University, Almugled, Sudan.; (A.A.D)
Department of Mathematics and Physics, Faculty of Education, University of Kassala, Kassala, Sudan.; (F.M.M)
Department of Mathematics, College of Education, Rumbek University of Science and Technology, Rumbek, South Sudan.; (M.Y.A.B)
Faculty of Pure and Applied Sciences, International University of Africa, Khartoum, Sudan.; (E.M.A)
\(^{1}\)Corresponding Author: majid_dafallah@yahoo.com

Abstract

In this paper, we are concerned with proving the existence of a random attractor for the stochastic dynamical system generated by the extensible beam equation with localized non-linear damping and linear memory defined on a bounded domain. First we investigate the existence and uniqueness of solutions and a bounded absorbing set, then the asymptotic compactness. The longtime behavior of solutions is analyzed. In particular, in the non-autonomous case, the existence of a random attractor for solutions is achieved.

Keywords:

Beam equation, memory, nonlinear damping, Random Dynamical System, random attractor.

1. Introduction

We consider the following extensible beam equation with localized non-linear damping and linear memory on a bounded domain:

\begin{equation} \left\{\begin{aligned} &\ u_{tt}+\Delta^{2}u-k(0)(1+\int_{\Omega}|\nabla u|^{2}dx)\Delta u-\int_{0}^{\infty}k'(s)\Delta u(t-s)ds+a(x)g(u_{t})+f(u)=q(x,t)+\kappa \sum_{j=1}^{m}h_{j}\dot{W}_{j}(t),\\ &\ u=\frac{\partial u}{\partial\Gamma} =0,~~x\in\partial\Gamma,~t\in\mathbb{R},\\ &\ u(\tau,x)=u_{0}(\tau,x),~~u_{t}(\tau,x)=u_{1}(\tau,x),~ x\in\Gamma,~\tau\in\mathbb{R},\end{aligned}\right.\label{1.1} \end{equation}
(1)
where \(\Gamma \) is a bounded domain of \(\mathbb{R}^{n}\), \(k(0), k(\infty) >0 \), \(k'(s)\leq 0 \) for every \(s\in\mathbb{R}^{+} \), and \(\kappa \) is a positive constant. The given function \(q(x,t)\in L_{loc}^{2}(\mathbb{R},L^{2}(\Gamma)) \) is an external force depending on \(t\), \(h_{j}\in H^{2}(\Gamma) \), and the \(W_{j}(t) \) are independent two-sided real-valued Wiener processes on a probability space. The function \(a(x) \) satisfies
\begin{equation}\label{1.2}a(x)\in L^{\infty}(\Gamma),a(x)\geq\alpha_{0}>0,~~~in \;\Gamma \end{equation}
(2)
where \(\alpha_{0} \) is a constant. The function \(f \in C^{1}(\mathbb{R}) \) satisfies
\begin{equation}\begin{cases} (\mathbf{A}_{1}):~~|f'(s)|\leq C_{1}(1+|s|^{\gamma-1})~,\forall~s\in\mathbb{R},\\ (\mathbf{A}_{2}):~~\liminf_{|s|\rightarrow\infty}\frac{|f(s)|}{s}>-\lambda_{1},\\ (\mathbf{A}_{3}):~~F(s)=\int_{0}^{s}f(r)dr\geq C_{2}(|s|^{\gamma+1}-1),\\ (\mathbf{A}_{4}):~~sf(s)\geq C_{3}(F(s)-1),\\ (\mathbf{A}_{5}):~~C_{2}(|s|^{\gamma+1}-1)\leq F(s)\leq \frac{1}{C_{3}}(sf(s)+C_{3}), \end{cases}\label{1.3}\end{equation}
(3)
where the \(C_{i} \) are positive constants \((i=1,2,3) \), \(1\leq \gamma\leq\frac{n+2}{n-2},~~ n\geq3 \), and \(\lambda_{1} \) is the best constant in the Poincaré-type inequality \[\lambda_{1}\int_{\Omega}|u|^{2}dx\leq\int_{\Omega}|\nabla u|^{2}dx. \] The damping function \(g \) is differentiable, strictly increasing, and satisfies
\begin{equation} g(0)=0,~~0< \alpha_{1}\leq g'(s)\leq \alpha_{2}< \infty.\label{1.4}\end{equation}
(4)
As in [1,2], we define a new variable
\begin{equation}\begin{aligned} &\ \eta(x,t,s)=u(x,t)-u(x,t-s),\;\; \eta_{t}=\frac{\partial}{\partial t}\eta~,~\eta_{s}=\frac{\partial}{\partial s}\eta. \end{aligned} \label{1.5}\end{equation}
(5)
Let \(\mu(s)=-k'(s) \). Equation (1) transforms into the following system:
\begin{equation}\left\{\begin{aligned} &\ u_{tt}+\Delta^{2}u-(1+k(0)\int_{\Omega}|\nabla u|^{2}dx)\Delta u-\int_{0}^{\infty}\mu(s)\Delta \eta(s)ds+a(x)g(u_{t})+f(u) =q(x,t)+\kappa \sum_{j=1}^{m}h_{j}\dot{W}_{j};\\ &\ \eta_{t}=-\eta_{s}+u_{t},\\ &\ u(t,x)=0,~~ x\in\partial\Gamma,~t>0;\\ &\ \eta(x,t,s)=0,~~ x\in\partial\Gamma,~~t>0,~~s\in~\mathbb{R}^{+};\\ &\ u(\tau,x)=u_{0}(x),u_{t}(\tau,x)=u_{1}(x), x\in\Gamma;\\ &\ \eta(x,\tau,s)=\eta_{0}(x,s)=u_{0}(x)-u_{0}(x,-s), x\in\Gamma,~s\in~\mathbb{R}^{+}. \end{aligned}\right.~\label{1.6}\end{equation}
(6)
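For the reader's convenience, the second equation in (6) is an immediate consequence of the definition (5): differentiating \(\eta(x,t,s)=u(x,t)-u(x,t-s) \) with respect to \(t \) and \(s \) gives \[\eta_{t}(x,t,s)=u_{t}(x,t)-u_{t}(x,t-s),\qquad \eta_{s}(x,t,s)=u_{t}(x,t-s), \] so that \(\eta_{t}+\eta_{s}=u_{t} \), that is, \(\eta_{t}=-\eta_{s}+u_{t} \).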
The following hypotheses are necessary to obtain our main results; see [3,4,5].
  • (a) The memory kernel \(\mu \) is required to satisfy the following hypotheses:
    \begin{equation}\begin{cases} (\mathbf{H}_{1}):~~ \mu\in\mathbb{C}^{1}(\mathbb{R^{+}})\cap\mathbb{L}^{1}(\mathbb{R^{+}}),\\ (\mathbf{H}_{2}):~~\mu(s)\geq0 ,\ \mu'(s)\leq0,~ \forall s\in\mathbb{R^{+}} ,\\ (\mathbf{H}_{3}):~~ \mu'(s)+k_{1}\mu(s)\leq 0,~ \forall s\in\mathbb{R^{+}},\ \text{for some}~k_{1}> 0,\\ (\mathbf{H}_{4}):~~m_{0}:=\int_{0}^{\infty}\mu(s)ds< \infty.\end{cases}\label{1.7}\end{equation}
    (7)
  • (b) We need the following conditions on \(q(x,t)\in L_{loc}^{2}(\mathbb{R},L^{2}(\Gamma)) \): there exists a positive constant \(\sigma \) such that
    \begin{equation}\begin{cases} (\mathbf{Q}_{1}):~~\int_{-\infty}^{\tau}e^{\sigma r}\|q(\cdot,r)\|^{2}dr < \infty,~~\forall~~ \tau\in\mathbb{R},\\ (\mathbf{Q}_{2}):~~\sup_{r\in\mathbb{R}}\|q(\cdot,r)\|^{2} < \infty,\\ (\mathbf{Q}_{3}):~~\lim_{k\rightarrow{\infty}} \int_{-\infty}^{\tau} \int_{|x|\geq k}e^{\sigma r}|q(x,r)|^{2}dx dr =0,~\forall\tau\in \mathbb{R}.\end{cases}\label{1.8}\end{equation}
    (8)
The basic concepts and notions of random attractors for infinite-dimensional random dynamical systems were recently presented in [6,7,8,9]. A random attractor of an RDS is a measurable, compact, invariant random set attracting all orbits. When such an attracting set exists, it is the smallest attracting compact set and the largest invariant set. In recent years, random attractors for autonomous and non-autonomous stochastic dynamical systems have been studied by many authors; see, for example, [10,11,12,13,14,15,16] and the references therein.

In the deterministic case, that is, \(\kappa= 0 \) in (1), the asymptotic behavior of solutions and the global attractor of the extensible beam equation with localized nonlinear damping and memory have been studied in [5,17,18,19].

In [20], for the case \(\mu=0 \) in (1), the authors investigated the existence of a random attractor for the stochastic extensible beam equation with localized nonlinear damping and without memory. However, there were no results for the bounded domain case with memory, which is precisely our interest in this paper. To the best of our knowledge, the dynamics of system (1) involves essential difficulties in showing compactness, which we handle by uniform estimates on the tails of solutions, motivated by a similar technique of [16].

The rest of the paper is organized as follows. In Section 2, we recall some basic concepts and properties of general random dynamical systems. In Section 3, we first provide some basic settings for (1) and show that it generates a random dynamical system in a proper function space. In Section 4, we prove the existence of a unique random attractor of the random dynamical system by constructing a bounded absorbing set and a compact measurable pullback attracting set.

2. Preliminaries

In this section, we recall some basic concepts related to random attractors for stochastic dynamical systems, which are crucial for obtaining our main results; the reader is referred to [6,7,8] for more details. Let \((\Omega,\mathcal{F},P ) \) be a probability space and \((X,d) \) be a Polish space with the Borel \(\sigma \)-algebra \(B(X) \). The distance between \(x\in X \) and \(B{\subseteq X} \) is denoted by \(d(x,B) \). If \(B{\subseteq X} \) and \(C{\subseteq X} \), the Hausdorff semi-distance from \(B \) to \(C \) is denoted by \( d(B,C)=\sup_{x\in B}d(x,C) \).

Definition 1. ( \(\Omega,\mathcal{F},P,(\theta_{t})_{t\in\mathbb{R}} \)) is called a metric dynamical system if \(\theta:\mathbb{R} \times \Omega \longrightarrow \Omega \) is \((\mathcal{B}(\mathbb{R}) \times \mathcal{F},\mathcal{F}) \)-measurable, \(\theta_{0} \) is the identity on \(\Omega \), \(\theta_{s+t} = \theta_{t}\circ\theta_{s} \) for all \(s,t\in\mathbb{R} \), and \(\theta_{t}P = P \) for all \(t \in\mathbb{R} \).

Definition 2. A mapping \(\Phi(t,\tau,\omega,x):\mathbb{R}^{+}\times \mathbb{R} \times\Omega \times X\rightarrow X \) is called a continuous cocycle on \(X \) over \(\mathbb{R} \) and \((\Omega ,\mathcal{F},P,(\theta_{t})_{t\in \mathbb{R}}) \), if for all \(\tau\in\mathbb{R},~\omega\in\Omega \) and \(t,s \in \mathbb{R}^{+} \), the following conditions are satisfied:

  • i) \(\Phi(t,\tau,\omega,x):\mathbb{R}^{+}\times \mathbb{R}\times\Omega \times X\rightarrow X \) is a \((\mathcal{B}(\mathbb{R}^{+})\times \mathcal{F},\mathcal{B}(\mathbb{R})) \)-measurable mapping,
  • ii) \(\Phi(0,\tau,\omega,x) \) is identity on \(X, \)
  • iii) \( \Phi(t+s,\tau,\omega,\cdot)=\Phi(t,\tau+s,\theta_{s}\omega,\cdot)\circ\Phi(s,\tau,\omega,\cdot) \),
  • iv) \(\Phi(t,\tau,\omega,x):X \rightarrow X \) is continuous.

Definition 3. Let \(2^{X} \) be the collection of all subsets of \(X \). A set-valued mapping \((\tau,\omega)\mapsto \mathcal{D}(\tau,\omega):\mathbb{R}\times\Omega\mapsto 2^{X} \) is called measurable with respect to \(\mathcal{F} \) in \(\Omega \) if \(\mathcal{D}(\tau,\omega) \) is a (usually closed) nonempty subset of \(X \) and the mapping \(\omega\in\Omega\mapsto d(x,\mathcal{D}(\tau,\omega)) \) is \((\mathcal{F},\mathcal{B}(\mathbb{R})) \)-measurable for every fixed \(x\in X \) and \(\tau\in \mathbb{R} \). In this case, \(B=\{\mathcal{D}(\tau,\omega):\tau\in \mathbb{R},\omega\in\Omega\} \) is called a random set.

Definition 4. A random bounded set \(B=\{B(\tau,\omega):\tau\in \mathbb{R},\omega\in\Omega\}\in \mathcal{D} \) of \(X \) is called tempered with respect to \(\{\theta_{t}\}_{t\in\mathbb{R}} \) if, for P-a.e. \(\omega\in\Omega \), \[\lim_{t\rightarrow\infty}~ e^{-\beta t}~d(B(\theta_{-t} \omega)) =0,~ \forall~ \beta > 0, \] where \(d(B)=\sup_{x\in B}\|x\|_{X}. \)

Definition 5. Let \(\mathcal{D} \) be a collection of random subsets of \(X \) and \(K=\{K(\tau,\omega):\tau\in \mathbb{R},\omega\in\Omega\}\in \mathcal{D} \). Then \(K \) is called an absorbing set of \(\Phi \) in \(\mathcal{D} \) if, for all \(\tau\in \mathbb{R}, \omega\in\Omega \) and \(B\in \mathcal{D} \), there exists \(T=T(\tau,\omega,B)> 0 \) such that \[\Phi(t,\tau,\theta_{-t}\omega,B(\tau,\theta_{-t}\omega))\subseteq K(\tau,\omega),~\forall~t\geq T. \]

Definition 6. Let \(\mathcal{D} \) be a collection of random subsets of \(X \). Then \(\Phi \) is said to be \(\mathcal{D} \)-pullback asymptotically compact in \(X \) if, for P-a.e. \(\omega\in\Omega \), \(\{\Phi(t_{n},\theta_{-t_{n}}\omega,x_{n})\}_{n=1}^{\infty} \) has a convergent subsequence in \(X \) whenever \(t_{n}\rightarrow\infty \) and \(x_{n}\in{B(\theta_{-t_{n}}\omega)} \) with \(\{B(\omega)\}_{\omega\in\Omega}\in \mathcal{D} \).

Definition 7. Let \(\mathcal{D} \) be a collection of random subsets of \(X \) and \(\mathcal{A}=\{\mathcal{A}(\tau,\omega):\tau\in \mathbb{R},\omega\in\Omega\}\in \mathcal{D} \). Then \(\mathcal{A} \) is called a \(\mathcal{D} \)-random attractor (or \(\mathcal{D} \)-pullback attractor) for \(\Phi \) if the following conditions are satisfied for all \(t\in \mathbb{R}^{+},\tau\in \mathbb{R} \) and \(\omega\in\Omega \):

  • (i) \(\mathcal{A}(\tau,\omega) \) is compact, and \(\omega\mapsto d(x,\mathcal{A}(\omega)) \) is measurable for every \(x\in X \),
  • (ii) \({\mathcal{A}(\tau,\omega)} \) is invariant, that is \(\Phi(t,\tau,\omega,\mathcal{A}(\tau,\omega))=\mathcal{A}(\tau+t,\theta_{t}\omega),\forall~t\geq \tau, \)
  • (iii) \(\mathcal{A}(\tau,\omega) \) attracts every set in \(\mathcal{D} \), that is, for every \(B=\{B(\tau,\omega):\tau\in \mathbb{R},\omega\in\Omega\}\in \mathcal{D} \), \(\lim_{t\rightarrow\infty}~d_{X}(\Phi(t,\tau,\theta_{-t}\omega,B(\tau,\theta_{-t}\omega)),\mathcal{A}(\tau,\omega))=0, \) where \(d_{X} \) is the Hausdorff semi-distance given by \(d_{X}(Y,Z)=\sup_{y\in Y} \inf_{z\in Z}\|y-z\|_{X} \) for any \(Y\subseteq X \) and \(Z\subseteq X \).

Lemma 1. Let \(\mathcal{D} \) be a neighborhood-closed collection of \((\tau,\omega) \)-parameterized families of nonempty subsets of \(X \) and \(\Phi \) be a continuous cocycle on \(X \) over \(\mathbb{R} \) and \((\Omega,\mathcal{F},P,(\theta_{t})_{t\in\mathbb{R}}) \). Then \(\Phi \) has a pullback \(\mathcal{D} \)-attractor \(\mathcal{A} \) in \(\mathcal{D} \) if and only if \(\Phi \) is pullback \(\mathcal{D} \)-asymptotically compact in \(X \) and \(\Phi \) has a closed, \(\mathcal{F} \)-measurable pullback \(\mathcal{D} \)-absorbing set \(K \in\mathcal{D} \); the unique pullback \(\mathcal{D} \)-attractor \(\mathcal{A}=\{\mathcal{A}(\tau,\omega)\} \) is given by \[\mathcal{A}(\tau,\omega)=\bigcap_{r\geq0}\overline{\bigcup_{t\geq r}\Phi(t,\tau-t,\theta_{-t}\omega,K(\tau-t,\theta_{-t}\omega))},\quad \tau\in\mathbb{R},~\omega\in\Omega. \]

3. Existence and uniqueness of solution

In this section, we first collect some important results that will help to achieve our goal. Let \(A=\Delta^{2},~A^{\frac{1}{2}}=-\Delta \) and \(D(A)=\{u\in H^{4}:\Delta u\in H_{0}^{1} \}\). For \(\nu\in\mathbb{R} \), the power \(A^{\nu} \) is well defined and \(\mathbf{V}_{\nu}=D(A^{\frac{\nu}{4}}) \) is a Hilbert space with norm \(\|u\|_{\nu}^{2}=\|A^{\frac{\nu}{4}}u\|^{2} \). In particular, \(\mathbf{V}_{0}\hookrightarrow L^{2} \) and \(\mathbf{V}_{1}\hookrightarrow H^{2}\cap H_{0}^{1} \). The injection \(\mathbf{V}_{\nu_{1}}\hookrightarrow \mathbf{V}_{\nu_{2}} \) is a compact embedding if \(\nu_{1} > \nu_{2} \), and the generalized Poincaré inequality \[\|u\|_{\nu+1}^{4}\geq\lambda_{1}\|u\|_{\nu}^{4} \] holds, where \(\lambda_{1} \) is the first eigenvalue of \(A. \) Additionally, we define the following
\begin{equation}\begin{cases} (u,v)=\int_{\Gamma}uvdx\leq\|u\|\|v\|,\\ (u,u)=\|u\|^{2},\\ ((u,v))=\int_{\Gamma}\triangle u\triangle vdx\leq\|\triangle u\|\|\triangle v\|,\\ ((u,u))=\int_{\Gamma}\triangle u\triangle udx=\|\triangle u\|^{2}. \end{cases}\label{3.1} \end{equation}
(9)
Following [18], under the memory kernel hypotheses on \(\mu(\cdot) \), we let \(L^{2}_{\mu}(\mathbb{R}^{+};\mathbf{V}_{\nu}) \) denote the Hilbert space of functions \(\eta:\mathbb{R}^{+}\longrightarrow\mathbf{V}_{\nu} \) endowed with the inner product and norm, respectively,
\begin{equation}\begin{cases} (\eta_{1},\eta_{2})_{\mu,\nu}=\int_{0}^{\infty}\mu(s)(A^{\frac{\nu}{4}}\eta_{1}(s),A^{\frac{\nu}{4}}\eta_{2}(s))ds,\\ \|\eta\|_{\mu,\nu}^{2}=(\eta,\eta)_{\mu,\nu}=\int_{0}^{\infty}\mu(s)\|\eta(s)\|_{\nu}^{2}ds, \end{cases}\label{3.2}\end{equation}
(10)
in particular, we write \(\|\eta\|_{\mu}^{2}=\|\eta\|_{\mu,1}^{2} \). We define the product Hilbert space \(E=\mathbf{V}_{0}\times\mathbf{V}_{1}\times L_{\mu}^{2}(\mathbb{R}^{+};\mathbf{V}_{1}) \).

To convert Problem (6) with a random perturbation term into a deterministic one with a random parameter \(\omega \), we introduce an Ornstein-Uhlenbeck process driven by the Brownian motion, which satisfies the following differential equation

\begin{equation}dz_{j}+\delta z_{j}dt=dW_{j}(t),~\label{3.3}\end{equation}
(11)
Its unique stationary solution is given by
\begin{equation}z_{j}(\theta_{t}\omega_{j})=-\delta\int_{-\infty}^{0}e^{ \delta s}(\theta_{t}\omega_{j})(s)ds,~ s\in \mathbb{R},~t\in\mathbb{R},~\omega_{j}\in\Omega.~\label{3.4}\end{equation}
(12)
From [6,16], it is known that the random variable \(|z_{j}(\omega_{j})| \) is tempered and there is an invariant set \(\bar{\Omega}\subseteq\Omega \) of full \(P \)-measure such that \(z_{j}(\theta_{t}\omega_{j})=z_{j}(t,\omega_{j}) \) is continuous in \(t \) for each \(\omega\in\bar{\Omega} \). For convenience, we shall write \(\bar{\Omega} \) as \(\Omega \). It follows from Proposition 3.4 in [16] that, for any \(\epsilon > 0 \), there exists a tempered random variable \(\curlyvee(\omega)> 0 \) such that
\begin{equation}\sum_{j=1}^{m}(|z_{j}(\omega_{j})|^{2}+|z_{j}(\omega_{j})|^{\gamma+2})\leq\curlyvee(\omega) ,~\label{3.5}\end{equation}
(13)
where \(\curlyvee(\omega) \) satisfies, for P-a.e. \(\omega\in\Omega \),
\begin{equation}\curlyvee(\theta_{t}\omega)\leq e^{\varepsilon |t|} \curlyvee(\omega),~t\in\mathbb{R}.~\label{3.6}\end{equation}
(14)
Then it follows from the above inequality that, for P-a.e. \(\omega\in\Omega \),
\begin{equation}\sum_{j=1}^{m}(|z_{j}(\theta_{t}\omega_{j})|^{2}+|z_{j}(\theta_{t}\omega_{j})|^{\gamma+2})\leq e^{\varepsilon|t|} \curlyvee(\omega),~t\in\mathbb{R}.~\label{3.7}\end{equation}
(15)
Put \(\kappa h(x)z(\theta_{t}\omega)= \kappa\sum_{j=1}^{m}h_{j}z_{j}(\theta_{t}\omega_{j}) \), which solves \(dz+\delta zdt=\sum_{j=1}^{m}h_{j}\dot{W}_{j}(t) \).
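As a purely numerical illustration (not part of the analysis above), the following short Python sketch approximates the one-dimensional Ornstein-Uhlenbeck Equation (11) by the Euler-Maruyama scheme and compares the empirical long-time variance with the stationary value \(1/(2\delta) \); the values of \(\delta \), the step size and the horizon are arbitrary choices made only for this example.

import numpy as np

# Euler-Maruyama sketch for dz + delta*z dt = dW (cf. Equation (11));
# delta, dt and T are illustrative values, not taken from the paper.
rng = np.random.default_rng(0)
delta, dt, T = 1.5, 1e-3, 200.0
n_steps = int(T / dt)

z, samples = 0.0, []
for k in range(n_steps):
    dW = rng.normal(0.0, np.sqrt(dt))   # Brownian increment over one step
    z += -delta * z * dt + dW           # Euler-Maruyama update
    if k > n_steps // 2:                # discard the transient part
        samples.append(z)

print("empirical long-time variance:", np.var(samples))
print("stationary variance 1/(2*delta):", 1.0 / (2.0 * delta))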

Let \(v(t,\tau,x,\omega)=u_{t}+\varepsilon u- \kappa h(x)z(\theta_{t}\omega) \); then we can reduce (6) to a first-order (in time) evolution system of random partial differential equations (RPDE):

\begin{equation}\left\{\begin{aligned} &\ u_{t}=v-\varepsilon u +\kappa h(x) z(\theta_{t}\omega),\\ &\ v_{t}-\varepsilon v+\varepsilon^{2}u+Au+\int_{0}^{\infty}\mu(s)A^\frac{1}{2} \eta(s)ds =(1+k(0)\int_{\Omega}|\nabla u|^{2}dx)A^\frac{1}{2} u-a(x)g(u_{t})-f(u)+g(x,t)+\varepsilon \kappa h(x)z(\theta_{t}\omega),\\ &\ \eta_{t}+\eta_{s}=-\varepsilon u +v+\kappa h(x) z(\theta_{t}\omega),\\ &\ u(x,\tau)=u_{0}(x),u_{t}(\tau,x)=u_{1}(x), x\in \Gamma,\\ &\ v(x,\tau)= v_{0}(x)=u_{1}(x)+\varepsilon u_{0}(x)-\kappa h(x)z(\theta_{t}\omega),\\ &\ \eta(x,\tau,s)= \eta_{0}=u_{0}(x)-u_{0}(x,-s), x\in \Gamma,~s\in~\mathbb{R}^{+}. \end{aligned}\right.\label{3.8} \end{equation}
(16)
Consequently, the stochastic system (16) can be written in the abstract form
\begin{equation} \left\{\begin{aligned} &\ \psi'+H(\psi)=Q(\psi,t,\omega)\\ &\ \psi(\tau,\omega)= (u_{0}(x),u_{1}(x)+ \varepsilon u_{0}(x)-\kappa h(x)z(\theta_{t}\omega),\eta_{0})^{\top}~,~\psi=(u,v,\eta)^{\top}, \end{aligned}\right.\label{3.9} \end{equation}
(17)
in which \(\psi= \left(% \begin{array}{cc} u \\ v \\ \eta\\ \end{array} \right), \) \(H(\psi)=\left(% \begin{array}{ccc} \varepsilon u -v \\ -\varepsilon v+\varepsilon^{2}u+Au+\int_{0}^{\infty}\mu(s)A^{\frac{1}{2}} \eta(s)ds \\ \varepsilon u -v + \eta_{s} \\ \end{array} \right) \) and \(Q(\psi,\omega,t)= \left(% \begin{array}{cc} \kappa h(x)z(\theta_{t}\omega) \\ (1+k(0)\int_{\Omega}|\nabla u|^{2}dx)\triangle u-a(x)g(u_{t})-f(u)+g(x,t)+\varepsilon \kappa h(x)z(\theta_{t}\omega)\\ \kappa h(x)z(\theta_{t}\omega) \\ \end{array} \right). \) By [21], we have the fact that \(H \) is the infinitesimal generators of \(\mathbf{C}^{0} \)-semigroup \(e^{Ht} \) on \(E(\Gamma) \). It is not difficult to check that the function \(Q(\psi,\omega,t) : E\rightarrow E \) is locally Lipschitz continuous with respect to \(\psi \) and bounded for each \(\omega\in\Omega \).

By the classical semigroup theory of existence and uniqueness of solutions of evolution differential equations [21], the Problem (17) has a unique solution in the mild sense, and we get the following result.

Theorem 1. Let (2)-(5) and (7)-(8) hold. Then, for every \(\tau\in\mathbb{R},~\omega\in\Omega \) and \(\chi_{\tau}\in E(\Gamma) \), the Problem (17) has a unique solution \(\chi(t,\tau,\omega,\chi_{\tau}) \), which is continuous with respect to the initial datum \(\chi_{\tau}=(u_{0},v_{0},\eta_{0})^{\top}\in E(\Gamma) \) and satisfies the integral equation

\begin{equation} \chi(t,\tau,\omega,\chi_{\tau})=e^{-H(t)}\chi_{\tau}(\omega)+\int_{0}^{t}e^{-H(t-r)}Q(\chi,r,\omega)dr. ~\label{3.10}\end{equation}
(18)
Moreover, \(\chi(t,\tau,\omega,\chi_{\tau}) \) is continuous in \(\chi_{\tau} \) and measurable in \(\omega \).

Theorem 2. Let (2)-(4) and (7)-(8) hold. Then, for any \(\tau\in\mathbb{R},~\omega\in\Omega \) and \(\chi_{\tau}\in E(\Gamma) \), the solution \( \chi(t,\tau,\omega,\chi_{\tau})\in E(\Gamma) \) of the Problem (17) satisfies the properties of a continuous random dynamical system over \(\mathbb{R} \) and ( \(\Omega,\mathcal{F},P,(\theta_{t})_{t\in\mathbb{R}} \)). In particular, for P-a.e. \(\omega\in\Omega \) and all \(\mathrm{T}>0 \):

  • (1) if \(\chi_{\tau}(\omega)\in E \), then \(\chi(\cdot,\tau,\omega,\chi_{\tau})\in C([\tau,\tau+\mathrm{T});E) \),
  • (2) \(\chi(t,\tau,\omega,\chi_{\tau}) \) is jointly continuous in \(t \) and measurable in \(\chi_{\tau}(\omega) \),
  • (3) the solution mapping of (18) holds the properties of continuous cocycle.
From Theorem 1, we can define a continuous random dynamical system over \(\mathbb{R} \) and ( \(\Omega,\mathcal{F},P,(\theta_{t})_{t\in\mathbb{R}} \)), that is, \(\mathbf{\Phi}(t,\tau,\omega,\chi_{\tau}):\mathbb{R}\times\mathbb{R}^{+}\times\Omega \times E\mapsto E,\;\;t\geq\tau, \) such that
\begin{equation}\begin{cases} \mathbf{\Phi}(t,\tau,\theta_{-\tau}\omega,\chi_{\tau}(\theta_{-\tau}\omega))=\chi(t,\tau,\theta_{-\tau}\omega,\chi_{\tau}(\theta_{-\tau}\omega))\\ \qquad\qquad\qquad\qquad\qquad\quad\;\;=(u(t,\tau,\theta_{-\tau}\omega,\chi_{\tau}(\theta_{-\tau}\omega)),v(t,\tau,\theta_{-\tau}\omega,\chi_{\tau}(\theta_{-\tau}\omega)),\eta(t,\tau,\theta_{-\tau}\omega,\chi_{\tau}(\theta_{-\tau}\omega),s))^{\top},\\ \mathbf{\Phi}(0,\tau,\theta_{-\tau}\omega,\chi_{\tau}(\theta_{-\tau}\omega))=\chi(\tau,\tau,\theta_{-\tau}\omega,\chi_{\tau}(\theta_{-\tau}\omega))\\ \qquad\qquad\qquad\qquad\qquad\quad\;\;=(u(\tau,\tau,\theta_{-\tau}\omega,\chi_{\tau}(\theta_{-\tau}\omega)),v(\tau,\tau,\theta_{-\tau}\omega,\chi_{\tau}(\theta_{-\tau}\omega)),\eta(\tau,\tau,\theta_{-\tau}\omega,\chi_{\tau}(\theta_{-\tau}\omega),s))^{\top},\\ \mathbf{\Phi}(\tau,\tau-t,\theta_{-t}\omega,\chi_{\tau}(\theta_{-\tau}\omega))=\chi(\tau,\tau-t,\theta_{-\tau}\omega,\chi_{\tau}(\theta_{-\tau}\omega)). \end{cases}~\label{3.11}\end{equation}
(19)
It generates a random dynamical system. Moreover
\begin{equation}\mathbf{\hat{\Phi}}(\tau,\tau,\theta_{-\tau}\omega,\chi_{\tau}(\theta_{-\tau}\omega)): \chi(\tau,\tau,\theta_{-\tau}\omega,\chi_{\tau}(\theta_{-\tau}\omega))+(0,\kappa h(x)z(\theta_{\tau}\omega),0)^{\top}\mapsto\varphi(t,\omega)+(0,\kappa h(x)z(\theta_{\tau}\omega),0)^{\top}.~\label{3.12}\end{equation}
(20)
To show the conjugation of the solutions of the stochastic partial differential Equation (17) and the random partial differential Equation (19), we introduce the homeomorphism \(P(\theta_{t}\omega)(y,w,\zeta(s))^{\top}=(y,w-\varepsilon y+\kappa h(x) z(\theta_{t}\omega),\zeta(s) )^{\top},~(y,w,\zeta(s))^{\top}\in E(\Gamma) \), with inverse homeomorphism \(P^{-1}(\theta_{t}\omega)(y,w,\zeta(s))^{\top}=(y,w+\varepsilon y-\kappa h(x)z(\theta_{t}\omega),\zeta(s) )^{\top}; \) then we have the transformation
\begin{equation}\mathbf{\hat{\Phi}}(\tau,t,\omega)=P(\theta_{t}\omega)\mathbf{\Phi}(t,\omega)P^{-1}(\theta_{t}\omega),E\mapsto E,t\geq\tau.~\label{3.13}\end{equation}
(21)
Consider the equivalent RDS and introduce the isomorphism \(T_{\varepsilon} \) together with its inverse isomorphism \(T_{-\varepsilon} \):
\begin{equation} \begin{cases} \mathbf{\check{\Phi}}(\tau,t,\omega)=T_{\varepsilon}\mathbf{\hat{\Phi}}(t,\omega)T_{-\varepsilon} :\chi_{\tau}\mapsto\varphi(t+\tau,\tau,\theta_{-\tau}\omega,\chi_{\tau}(\theta_{-\tau}\omega)),\\ \varphi'+H(\varphi)=\bar{Q}(\varphi,t,\omega),\\ \varphi(\tau,\omega)=\varphi_{\tau}= (u_{0}(x),y_{1}(x)-\varepsilon y_{0}(x),\eta_{0})^{\top}, \end{cases}\label{3.15}%3.14 \end{equation}
(22)
where \[\begin{aligned} &\ \varphi= \left(y,w,\eta\right)^{\top}=(y,y_{t}+\varepsilon y,\eta)^{\top},\\ &\ T_{\varepsilon}\varphi= \left(y,w,\eta\right)^{\top}=(y,w+\varepsilon y,\eta)^{\top},\\ &\ T_{-\varepsilon}\varphi= \left(y,w,\eta\right)^{\top}=(y,w-\varepsilon y,\eta)^{\top},\\ \end{aligned} \] \[H(\varphi)=\left(% \begin{array}{ccc} \varepsilon y -w \\ -\varepsilon w+\varepsilon^{2}y+Ay+\int_{0}^{\infty}\mu(s)A^{\frac{1}{2}} \eta(s)ds \\ \varepsilon y -w + \eta_{s} \end{array} \right), \] and \[\bar{Q}(\varphi,\omega,t)= \left(% \begin{array}{cc} 0 \\ (1+k(0)\int_{\Omega}|\nabla y|^{2}dx)\triangle y-a(x)g(y_{t})-f(y)+g(x,t)+\kappa h(x)z(\theta_{t}\omega)\\ 0 \\ \end{array} \right) \] is also a random dynamical systems corresponding to the Equation (17). Therefore, \(\mathbf{\Phi}, \mathbf{\hat{\Phi}} \) and \(\mathbf{\check{\Phi}} \) are equivalent to each other in dynamics.

4. Random absorbing set

In this section, we will show the boundedness of the solutions of Equation (17), the existence of a pullback absorbing set for \(\mathbf{\Phi} \) in \(\mathcal{D} \), and the asymptotic compactness of the random dynamical system associated with Equation (17). From now on, we always assume that \(\mathcal{D} \) is the collection of all tempered subsets of \(E(\Gamma) \).

Lemma 2. Let (2)-(4) and (7)-(8) hold. Then, for any \(\tau\in\mathbb{R},~\omega\in\Omega \) and \(\chi_{\tau-t}\in E(\Gamma) \), there exists a random ball \(\{K(\omega)\}_{\omega \in \Omega} \in \mathcal{D} \) centered at \(0 \) with random radius \(M(\omega)\geq 0 \) such that \(\{K(\omega)\} \) is a random absorbing set for \(\Phi \) in \(\mathcal{D} \), that is, for any B= \(\{B(\omega)\}_{\omega\in\Omega}\in \mathcal{D} \), P-almost surely, there exists a \(T=T(\tau ,\omega,B)>0 \) and \(\chi_{\tau-t}(\omega)\in B(\omega) \) such that

\begin{equation}\left\|\chi\left(r,\tau-t,\theta_{-\tau}\omega,\chi_{\tau-t}\right)\right\|^{2}_ {E}\leq M_{0}^{2}(\omega),~\label{4.1}\end{equation}
(23)
where \(M_{0}(\omega) \) is a positive random function, that is
\begin{equation}\Phi(t,\tau,\theta_{-t}\omega,B(\tau,\theta_{-t}\omega))\subseteq K(\tau,\omega)~~for~ all~ t \geq T.~\label{4.2}\end{equation}
(24)

Proof. Taking the inner product of (17) with \(\chi=(u,v,\eta)\in E \), where \(v=\frac{du}{dt}+\varepsilon u-\kappa h(x)z(\theta_{t}\omega) \), we find that

\begin{equation} \left(\chi',\chi\right)+\left(H(\chi),\chi\right)=\left(\mathbf{F}(t,x,\chi),\chi\right).\label{4.3}\end{equation}
(25)
Using the Hölder, Young and Poincaré inequalities, after a simple computation we obtain
\begin{align} \left(H(\chi),\chi\right)&=\left(% \begin{array}{ccc} \varepsilon u -v \\ -\varepsilon v+\varepsilon^{2}u+Au+\int_{0}^{\infty}\mu(s)A^{\frac{1}{2}} \eta(s)ds \\ \varepsilon u -v + \eta_{s} \end{array} \right)\left(% \begin{array}{ccc} u\\ v\\ \eta \end{array} \right)\notag\\ &=\varepsilon\|\Delta u\|^{2}+\varepsilon^{2}(u,v)-\varepsilon\| v\|^{2}+(\varepsilon u+\eta_{s},\eta)\notag\end{align} \begin{align} &=\varepsilon\|\Delta u\|^{2}+\varepsilon^{2}(u,v)-\varepsilon\| v\|^{2}-\frac{\delta}{4}\|\nabla \eta\|_{\mu}^{2}-\frac{m_{0}\varepsilon^{2}}{2\lambda}\|\Delta u\|^{2}+\frac{\delta}{2}\|\nabla \eta\|_{\mu}^{2}\notag\\ &=\varepsilon\|\Delta u\|^{2}+\varepsilon^{2}(u,v)-\varepsilon\| v\|^{2}-\frac{m_{0}\varepsilon^{2}}{2}\|\nabla u\|^{2}+\frac{\delta}{4}\|\nabla \eta\|_{\mu}^{2}.\label{4.4}\end{align}
(26)
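For the reader's convenience, we record the standard estimate behind the memory term appearing in (26); this is a sketch under the usual additional assumption that \(\mu(s)\|\nabla \eta(s)\|^{2}\rightarrow 0 \) as \(s\rightarrow\infty \), so that the boundary terms of the integration by parts in \(s \) vanish (recall also that \(\eta(0)=0 \)): \[(\eta_{s},\eta)_{\mu,1}=\frac{1}{2}\int_{0}^{\infty}\mu(s)\frac{d}{ds}\|\nabla \eta(s)\|^{2}ds=-\frac{1}{2}\int_{0}^{\infty}\mu'(s)\|\nabla \eta(s)\|^{2}ds\geq\frac{k_{1}}{2}\int_{0}^{\infty}\mu(s)\|\nabla \eta(s)\|^{2}ds=\frac{k_{1}}{2}\|\nabla \eta\|_{\mu}^{2}, \] where the last inequality uses \((\mathbf{H}_{2}) \)-\((\mathbf{H}_{3}) \).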
Using the Cauchy-Schwarz and Young inequalities, we obtain
\begin{align}\left(\mathbf{F}(t,x,\chi),\chi\right)=\left(% \begin{array}{cc} k h(x)z(\theta_{t}\omega) \\ (1+k(0)\int_{\Omega}|\nabla y|^{2}dx)A^{\frac{1}{2}} u-a(x)g(u_{t})-f(u)+g(x,t)+\kappa h(x)z(\theta_{t}\omega)\\ k h(x)z(\theta_{t}\omega) \\ \end{array} \right)\left(% \begin{array}{ccc} u\\ v\\ \eta \end{array} \right).~\label{4.5}\end{align}
(27)
From (3) \(_{(\mathbf{A}_{2}),(\mathbf{A}_{4})} \), we obtain
\begin{align} -\left(1+k(0)\|\nabla u\|^{2})\triangle u,v\right) &=-\left(\left(1+k(0)\|\nabla u\|^{2}\right)\nabla,\nabla(\frac{du}{dt}+\varepsilon u-ah(x) z(\theta_{t}\omega)\right)\notag\\ &\leq -\left(1+k(0)\|\nabla u\|^{2}\right)\left(\frac{1}{2}\frac{d}{dt}\|\nabla u\|^{2}+\frac{\varepsilon}{2} \|\nabla u\|^{2}\right) +\frac{\kappa^{2}}{2\varepsilon}\|\nabla h(x))\|^{2} | z(\theta_{t}\omega)|^{2},~\label{4.6}\end{align}
(28)
and from (2) and (4), it is easy to show that
\begin{align} \left(a(x)g(u_{t}),v\right)&=\left(\alpha_{0}g(\vartheta)\left(v-\varepsilon u+\kappa h(x) z(\theta_{t}\omega)-g(0)\right),v\right)\notag\\ &\leq\alpha_{0}\alpha_{1}\|v\|^{2}+\alpha_{0}\left(-\alpha_{2}\varepsilon u+g'(\vartheta)\kappa h(x) z(\theta_{t}\omega,v\right),\label{4.7}\end{align}
(29)
where \(\vartheta \) is between \(0 \) and \(v-\varepsilon u+\kappa h(x) z(\theta_{t}\omega) \).
\begin{align} \left(q(x,t),v\right) =\|q(x,t)\|\|v\|\leq\frac{\|q(x,t)\|^{2}}{2(\alpha_{0}\alpha_{1}-\varepsilon)}+\frac{\alpha_{0}\alpha_{1}-\varepsilon}{2}\|v\|^{2} , \end{align}
(30)
\begin{align} ((k h(x)z(\theta_{t}\omega),u))\leq\left\|\Delta u\right\|\left\|\Delta h(x)\right\| \left| z(\theta_{t}\omega)\right|\leq \frac{\varepsilon}{4}\left\|\triangle u\right\|^{2}+\frac{k^{2}}{\varepsilon}\left\|\Delta h(x)\right\|^{2}\left|z(\theta_{t}\omega)\right|^{2}, \end{align}
(31)
\begin{align} (k h(x)z(\theta_{t}\omega),\eta)_{\mu}\leq \frac{m_{0}k^{2}}{\delta}\left\|\nabla h(x)\right\|^{2}\left| z(\theta_{t}\omega)\right|^{2}+\frac{\delta}{4}\|\nabla \eta\|_{\mu}^{2} , \end{align}
(32)
\begin{align} (\alpha_{0}g'(\vartheta)-2\varepsilon)\kappa h(x)z(\theta_{t}\omega),v)\leq\frac{2\alpha^{2}_{2} \kappa^{2}}{\alpha_{0}\alpha_{1}-\varepsilon}\| h(x)\|^{2}\left|z(\theta_{t}\omega)\right|^{2} +\frac{\alpha_{0}\alpha_{1}-\varepsilon}{8}\left\|v\right\|^{2}.\end{align}
(33)
By the second terms on the right-hand side of (26) and (29), we can get
\begin{equation}\varepsilon(\varepsilon-\alpha_{2}\alpha_{0})(u,v)\geq-\frac{\alpha_{0}\alpha_{2}\varepsilon}{\lambda}\|\nabla u\|\| v\|\geq\frac{2(\alpha^{2}_{0}\alpha^{2}_{2}\varepsilon^{2})}{(\alpha_{0}\alpha_{1}-\varepsilon)\lambda^{2}}\|\nabla u\|^{2}-\frac{\alpha_{0}\alpha_{1}-\varepsilon}{8}\| v\|^{2}.\label{4.12}\end{equation}
(34)
For the nonlinearity, by (3), the Hölder inequality and the Sobolev embedding theorem, we estimate that
\begin{equation}(f(u),v)=(f(u),\frac{du}{dt}+\varepsilon u-\kappa h(x)z(\theta_{r-\tau}\omega))\geq\frac{d}{dt}F(u)+\varepsilon C_{3}(F(u)-|\Gamma|)+(f(u),\kappa h(x)z(\theta_{r-\tau}\omega)).~\label{4.13}\end{equation}
(35)
From (3) \(_{(\mathbf{A}_{3})-(\mathbf{A}_{5})} \), we have
\begin{align}&(f(u), \kappa h(x)z(\theta_{r-\tau}\omega))\leq C_{1}\int_{\Gamma}(1+|u|^{\gamma})\kappa h(x)z(\theta_{r-\tau}\omega)dx\notag\\ &\leq C_{1}\kappa\|h(x)\||z(\theta_{r-\tau}\omega)|+C_{1}\kappa\int_{\Gamma}(|u|^{\gamma+1})^{\frac{\gamma}{\gamma+1}} \|h(x)\|_{L^{\gamma+1}(U)}|z(\theta_{r-\tau}\omega)|^{\frac{\gamma}{\gamma+1}}\notag\\ &\leq C_{1}\kappa\|h(x)\||z(\theta_{r-\tau}\omega)|+C_{1}\kappa(\frac{1}{C_{2}}F(u)+\int_{\Gamma}dx) ^{\frac{\gamma}{\gamma+1}}\|h(x)\|_{L^{\gamma+1}(\Gamma)}|z(\theta_{r-\tau}\omega)|^{\frac{\gamma}{\gamma+1}}\notag\\ &\leq C_{1}\kappa\|h(x)\||z(\theta_{r-\tau}\omega)|+\frac{\kappa\varepsilon C_{1}}{2}|\Gamma|+\frac{\varepsilon C_{1}}{2C_{2}}F(u) +C^{\frac{\gamma}{\gamma+1}}_{1}\kappa^{\frac{\gamma}{\gamma+1}}\|h(x)\|^{\gamma+1}_{H^{1}_{0}(\Gamma)}|z(\theta_{r-\tau}\omega)|^{\gamma+1}.~\label{4.14}\end{align}
(36)
Combining the above two inequalities yields
\begin{align}&(f(u),v)=\notag\\ &\frac{d}{dt}F(u)+ \frac{\varepsilon\left(2C_{3}- C_{1}C^{-1}_{2}\right)}{2}(F(u)-|\Gamma|) +C_{1}\kappa\|h(x)\||z(\theta_{r-\tau}\omega)|+C^{\frac{\gamma}{\gamma+1}}_{1}\kappa^{\gamma+1} \|h(x)\|^{\gamma+1}_{H^{1}_{0}(\Gamma)}|z(\theta_{r-\tau}\omega)|^{\gamma+1},~\label{4.15}\end{align}
(37)
Collecting all the inequalities (25)-(37) leads to
\begin{align}&\displaystyle\frac{d}{dt}\left(\|v\|^{2}+\|\nabla u\|^{2}+(1+k(0)\|\nabla u\|^{2})\|\nabla u\|^{2}+\frac{\delta}{4}\|\nabla \eta\|_{\mu}^{2}-\frac{\varepsilon\left(2C_{3}- C_{1}C^{-1}_{2}\right)}{2}\int_{\Gamma}F(u)dx\right)\notag\\ &\;\;\;+\frac{\alpha_{0}\alpha_{1}-\varepsilon}{2}\|v\|^{2} +2\varepsilon(1+k(0)\|\nabla u\|^{2})\|\nabla u\|^{2}+2\varepsilon^{2}(m-\frac{\alpha^{2}_{0}\alpha^{2}_{2}}{(\alpha_{0}\alpha_{1}-\varepsilon)\lambda^{2}})\|\nabla u\|^{2}+\frac{\delta}{4}\|\nabla \eta\|_{\mu}^{2}\notag\\ &\leq \frac{\|q(x,t)\|^{2}}{(\alpha_{0}\alpha_{1}-\varepsilon)} +C_{1}\kappa\|h(x)\||z(\theta_{r-\tau}\omega)|+C^{\frac{\gamma}{\gamma+1}}_{1}\kappa^{\gamma+1} \|h(x)\|^{\gamma+1}_{H^{1}_{0}(\Gamma)}|z(\theta_{r-\tau}\omega)|^{\gamma+1}\notag\\ &\;\;\;+\varepsilon k^{2}\left\|\Delta h(x)\right\|^{2}\left|z(\theta_{r-\tau}\omega)\right|^{2} +\frac{2\alpha^{2}_{2} \kappa^{2}}{\alpha_{0}\alpha_{1}-\varepsilon}\| h(x)\|^{2}\left|z(\theta_{r-\tau}\omega)\right|^{2}+ \frac{m_{0}k^{2}}{\delta}\left\|\nabla h(x)\right\|^{2}\left|z(\theta_{r-\tau}\omega)\right|^{2}\notag\\ &\;\;\;+\frac{\kappa^{2}}{2\varepsilon}\|\nabla h(x))\|^{2} | z(\theta_{r-\tau}\omega)|^{2}+\frac{\varepsilon\left(2C_{3}- C_{1}C^{-1}_{2}\right)}{2}|\Gamma| .\label{4.16}\end{align}
(38)
Thus
\begin{equation}\begin{array}{ll} \|\varphi\|_{E(\Gamma)}^{2} =\|v\|^{2}+\|\nabla u\|^{2}+(1+k(0)\|\nabla u\|^{2})\|\nabla u\|^{2}+\frac{\delta}{4}\|\nabla \eta\|_{\mu}^{2}-\frac{\varepsilon\left(2C_{3}- C_{1}C^{-1}_{2}\right)}{2}\int_{\Gamma}F(u)dx ,\end{array}~\label{4.17}\end{equation}
(39)
and
\begin{align} \displaystyle\varrho\left(\theta_{r-\tau}\omega\right)&= \frac{\|q(x,t)\|^{2}}{(\alpha_{0}\alpha_{1}-\varepsilon)} +C_{1}\kappa\|h(x)\||z(\theta_{r-\tau}\omega)|+C^{\frac{\gamma}{\gamma+1}}_{1}\kappa^{\gamma+1} \|h(x)\|^{\gamma+1}_{H^{1}_{0}(\Gamma)}|z(\theta_{r-\tau}\omega)|^{\gamma+1}\notag\\ &\;\;\; +\varepsilon k^{2}\left\|\Delta h(x)\right\|^{2}\left|z(\theta_{r-\tau}\omega)\right|^{2} +\frac{2\alpha^{2}_{2} \kappa^{2}}{\alpha_{0}\alpha_{1}-\varepsilon}\| h(x)\|^{2}\left|z(\theta_{r-\tau}\omega)\right|^{2}+ \frac{m_{0}k^{2}}{\delta}\left\|\nabla h(x)\right\|^{2}\left|z(\theta_{r-\tau}\omega)\right|^{2}\notag\\ &\;\;\;+\frac{\kappa^{2}}{2\varepsilon}\|\nabla h(x))\|^{2} | z(\theta_{r-\tau}\omega)|^{2}+\frac{\varepsilon\left(2C_{3}- C_{1}C^{-1}_{2}\right)}{2}|\Gamma| .\label{4.18}\end{align}
(40)
Let \(\varepsilon\in(0,1) \) be small enough that \(\varepsilon^{2}\left(m-\frac{\alpha^{2}_{0}\alpha^{2}_{2}}{(\alpha_{0}\alpha_{1}-\varepsilon)\lambda^{2}}\right)>0,\;\;~ ~\frac{\alpha_{0}\alpha_{1}-\varepsilon}{2}> 0,\;\;\frac{\varepsilon\left(2C_{3}- C_{1}C^{-1}_{2}\right)}{2}>0. \) We choose \(\sigma=\min\left\{\frac{\alpha_{0}\alpha_{1}-\varepsilon}{2},\, 2\varepsilon^{2}\Big(m-\frac{\alpha^{2}_{0}\alpha^{2}_{2}}{(\alpha_{0}\alpha_{1}-\varepsilon)\lambda^{2}}\Big)\right\} \) and \(\tilde{\sigma}=\min\left\{\sigma,\frac{\varepsilon\left(2C_{3}- C_{1}C^{-1}_{2}\right)}{2},\frac{\delta}{4}\right\} \), which gives
\begin{equation} ~~~\displaystyle\frac{d}{dt}\left\|\varphi(r)\right\|_{E}^{2} +\tilde{\sigma}\|\varphi(r)\|_{E}^{2} \leq \varrho(\theta_{r-\tau}\omega) .\label{4.19}\end{equation}
(41)
Applying Gronwall's Lemma over \([\tau-t~,r] \), we find that for \(r\geq \tau-t \),
\begin{equation} \displaystyle\left\|\varphi(r,\tau-t,\omega,\varphi_{\tau-t}(\omega))\right\|_{E}^{2} \leq e^{-\tilde{\sigma}(r-\tau+t)}\|\varphi_{\tau-t}\|_{E}^{2}+\int_{\tau-t}^{r}\varrho(\theta_{\varsigma-\tau} \omega)e^{-\tilde{\sigma}(r-\varsigma)}d\varsigma .\label{4.20}\end{equation}
(42)
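For completeness, the Gronwall step leading to (42) can be sketched with the integrating factor \(e^{\tilde{\sigma} r} \): from (41), \[\frac{d}{dr}\left(e^{\tilde{\sigma} r}\|\varphi(r)\|_{E}^{2}\right)=e^{\tilde{\sigma} r}\left(\frac{d}{dr}\|\varphi(r)\|_{E}^{2}+\tilde{\sigma}\|\varphi(r)\|_{E}^{2}\right)\leq e^{\tilde{\sigma} r}\varrho(\theta_{r-\tau}\omega), \] and integrating from \(\tau-t \) to \(r \) and multiplying by \(e^{-\tilde{\sigma} r} \) gives \[\|\varphi(r)\|_{E}^{2}\leq e^{-\tilde{\sigma}(r-\tau+t)}\|\varphi(\tau-t)\|_{E}^{2}+\int_{\tau-t}^{r}e^{-\tilde{\sigma}(r-\varsigma)}\varrho(\theta_{\varsigma-\tau}\omega)d\varsigma . \]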
By replacing \( \omega \) by \( \theta_{-t}\omega \), we obtain from (42) that, for all \(t\geq0 \),
\begin{align}&\displaystyle\left\|\varphi(r,\tau-t,\theta_{-\tau}\omega,\varphi_{\tau-t}(\theta_{-\tau}\omega))\right \|_{E}^{2}\leq\left\|\chi(r,\tau-t,\theta_{-\tau}\omega,\chi_{\tau-t}(\theta_{-\tau}\omega))\right \|_{E}^{2}\notag\\ & \leq e^{-\tilde{\sigma} t}\|\chi(\tau-t,\tau-t,\theta_{-\tau}\omega,\chi_{\tau-t}(\theta_{-\tau}\omega))\|_{E}^{2} +\int_{\tau-t}^{\tau}\varrho(\theta_{\varsigma-\tau}\omega)e^{\tilde{\sigma}(\tau-\varsigma)}d\varsigma .\label{4.21}\end{align}
(43)
Since \(z(\theta_{t}\omega) \) is a tempered random variable with \(\lim_{t\rightarrow{\pm\infty}}\frac{z(\theta_{t}\omega)}{t}=0 \) and \(\int_{\pm\infty}^{0}\frac{1}{t}z(\theta_{r}\omega)dr=0 \), there exist \(M_{0}(\omega) \) and \(T=T(\tau ,\omega,B)>0 \) such that \[\begin{array}{ll} \displaystyle ~~\limsup_{t\rightarrow{+\infty}} e^{-\tilde{\sigma} t}\|\chi(\tau-t,\theta_{-\tau}\omega)\|_{E}^{2}=0,\\ \displaystyle\int_{-\infty}^{0}\varrho(\theta_{\varsigma-\tau}\omega)e^{-\tilde{\sigma}(\tau-\varsigma)}d\varsigma=: M^{2}_{0}(\omega)< +\infty,\\ \end{array}~ \] and therefore
\begin{equation} ~~\displaystyle \|\chi(r,\tau-t,\theta_{-\tau}\omega,\chi_{\tau-t}(\theta_{-\tau}\omega))\|_{E}^{2}\leq M^{2}_{0}(\omega).\label{4.22}\end{equation}
(44)
The proof is completed.

Now we decompose the Equation (6) into two parts and also decompose the nonlinear growth term \(f\in C^{1} \) in Equation (3) into two parts \(f=f_{1}+f_{2} \), where \(f_{1} \) and \(f_{2} \) satisfy, respectively,

\begin{equation}\begin{cases} &\ (\mathbf{A}_{1}):~~uf_{i}(u)\geq 0,\\ &\ (\mathbf{A}_{2}):~~|f'_{1}(u)| \leq \mu_{1}(1+|u|^{\frac{4}{n-2}}),\forall~u\in~\mathbb{R},~n\geq3\\ &\ (\mathbf{A}_{3}):~~|f_{2}(u)|\leq \mu_{2}(1+|u|^{\gamma})~,\forall~u\in\mathbb{R},\\ &\ (\mathbf{A}_{4}):~~F_{i}(u)=\int_{0}^{u}f_{i}(r)dr,\\ &\ (\mathbf{A}_{5}):~~uf_{i}(u)\geq \mu_{i}(F(u)-1),\\ &\ (\mathbf{A}_{6}):~~k_{0}(|u|^{\gamma+1}-1)\leq F_{i}(u)\leq k_{1}uf_{i}(u)+C_{\mu}~. \end{cases}\label{4.23}\end{equation}
(45)
where \(\mu_{i}~(i=1,2), C_{\mu},k_{0},k_{1} \) are positive constants. For any \(\tau\in\mathbb{R},\omega\in\Omega \), there is a time \(T_{1}=T_{1}(B_{0},\omega) \) such that
\begin{equation}\displaystyle \hat{B}(\omega)=\sqcup_{t\geq \hat{T}} \chi\left(\tau,\tau-t,\theta_{-\tau}\omega;\chi_{\tau-t}(\omega)\right)=\chi_{\tau-t}(\omega)\in \hat{B}(\tau,\theta_{-\tau}\omega))\subseteq B_{0}(\omega),~\forall~t\geq \hat{T},\label{4.24}\end{equation}
(46)
for any \(\omega\in\Omega \), where \(\hat{T}=\hat{T}(B_{0},\omega)\geq\tau \) is the pullback absorbing time in Lemma 2; then, since \(\hat{B}(\omega)\subseteq B_{0}(\omega) \), it holds that
\begin{equation} \displaystyle\Phi(t,\tau,\theta_{-t}\omega;\hat{B}(\tau,\theta_{-t}\omega)) =\chi(r,\tau-t,\theta_{-\tau}\omega;\hat{B}(\tau,\theta_{-\tau}\omega))\subseteq \hat{B}(\tau,\theta_{-\tau}\omega))\subseteq B_{0}(\omega),~\forall~t\geq \check{T}.\label{4.25}\end{equation}
(47)
In order to obtain the regularity estimates, we decompose the solution \(\chi(t,\tau,\omega)=(u(t,\tau,\omega),v(t,\tau,\omega),\eta^{t}(t,\tau,s,\omega))^{\top} \) of system (6) with initial data \(\chi(\tau,\omega)=(u_{0},v_{0},\eta_{0}^{t})^{\top} \) into two parts
\begin{equation}\left\{\begin{aligned} &\ \chi(t,\tau,\omega)=\hat{\chi}(t,\tau,\omega)+\check{\chi}(t,\tau,\omega),\\ &\ u=y+w,\\ &\ \eta^{t}=\hat{\eta}^{t}+\check{\eta}^{t}. \end{aligned}\right .\label{4.26}\end{equation}
(48)
Then we can rewrite the Equation (6) as the following systems:
\begin{equation} \left\{\begin{aligned} &\ y_{tt}+\Delta^{2}y-k(0)(1+\int_{\Omega}|\nabla y|^{2}dx)\Delta y-\int_{0}^{\infty}\mu(s)\Delta \hat{\eta}(s)ds+a(x)g(y_{t})+f_{1}(y)=\hat{q}(x,t),\\ &\ \hat{\eta}_{t}=-\hat{\eta}_{s}+y_{t},\\ &\ y(\tau,x)=y_{0}(x),~y_{t}(\tau,x)=y_{1}(x),~ x\in \Gamma,~\tau\in~\mathbb{R},\\ &\ \hat{\eta}_{\tau}(x,\tau,s)=y_{0}(x)-y_{0}(x,-s),~ x\in \Gamma,~\tau\in~\mathbb{R},s\in~\mathbb{R}^{+}, \end{aligned}\right.~\label{4.27} \end{equation}
(49)
Let \(\hat{\chi}(t,\omega)=(\hat{y} ,\check{y},\hat{\eta}^{t}(t,s))^{\top} \), where \(\hat{y}=y \) and \(\check{y}=\hat{y}_{t}+\varepsilon \hat{y} \); this system is equivalent to
\begin{equation} \left\{\begin{aligned} &\ \hat{\chi}'+H(\hat{\chi})=\hat{F}(\hat{\chi},t,\omega),\\ &\ \hat{\chi}(\tau,\omega)= (\hat{y}_{0}(x),\hat{y}_{1}(x)+ \varepsilon \hat{y}_{0}(x),\hat{\eta}_{0})^{\top}~,~\hat{\chi}=(\hat{y},\check{y},\hat{\eta})^{\top}, \end{aligned}\right.\label{4.28} \end{equation}
(50)
where \[H(\hat{\chi})=\left(% \begin{array}{ccc} \varepsilon \hat{y} -\check{y} \\ -\varepsilon \check{y}+\varepsilon^{2}\hat{y}+A\hat{y}+\int_{0}^{\infty}\mu(s)A^{\frac{1}{2}} \hat{\eta}(s)ds \\ \varepsilon \hat{y} -\check{y} + \hat{\eta}_{s} \\ \end{array} \right), \;\;\;\;\; \] \[\hat{F}(\hat{\chi},\omega,t)= \left(% \begin{array}{cc} 0 \\ (1+k(0)\int_{\Omega}|\nabla \hat{y}|^{2}dx)A^{\frac{1}{2}} y_{1}-a(x)g(\hat{y}_{t})-f_{1}(\hat{y})+\hat{q}(x,t)\\ 0 \\ \end{array} \right), \] and
\begin{equation} \left\{\begin{aligned} &\ w_{tt}+\Delta^{2}w-k(0)(1+\int_{\Omega}|\nabla w|^{2}dx)\Delta w-\int_{0}^{\infty}\mu \Delta \check{\eta } ds+a(x)g(w_{t})+f(u)-f_{1}(y) =\check{q}(x,t)+\kappa \sum_{j=1}^{m}h_{j}\dot{W}_{j},\\ &\ \check{\eta}_{t}=-\check{\eta}_{s}+w_{t},\\ &\ w(\tau,x)=w_{0}(\tau,x),~w_{t}(\tau,x)=w_{1}(\tau,x),~ x\in\Gamma,~\tau\in~\mathbb{R},\\ &\ \check{\eta}_{\tau}(x,\tau,s)=w_{0}(x,\tau)-w_{0}(x,\tau-s),~ x\in\Gamma,~\tau\in~\mathbb{R},s\in~\mathbb{R}^{+}, \end{aligned}\right.~\label{4.29} \end{equation}
(51)
Set \(\check{\chi}=\left(\hat{w}, \check{w},\check{\eta}^{t}\right)^{\top} \), where
\begin{equation}\check{w}=\hat{w}_{t}+\delta \hat{w}-\kappa z(\theta_{t}\omega).~\label{4.30}\end{equation}
(52)
The above equations lead to
\begin{equation} \left\{\begin{aligned} &\ \check{\chi}'+H(\check{\chi})=\check{F}(\check{\chi},t,\omega),\\ &\ \check{\chi}(\tau,\omega)= (\hat{w}_{0}(x),\hat{w}_{1}(x)+ \varepsilon \hat{w}_{0}(x)-\kappa z(\theta_{t}\omega),\check{\eta}_{0})^{\top}~,~\check{\chi}=(\hat{w},\check{w},\check{\eta})^{\top}, \end{aligned}\right.\label{4.31}\end{equation}
(53)
in which \[H(\check{\chi})=\left(% \begin{array}{ccc} \varepsilon \hat{w} -\check{w} \\ -\varepsilon \check{w}+\varepsilon^{2}\hat{w}+A\hat{w}+\int_{0}^{\infty}\mu(s)A^{\frac{1}{2}} \check{\eta}(s)ds \\ \varepsilon \hat{w} -\check{w} + \check{\eta}_{s} \\ \end{array} \right), \] and \[\check{F}(\check{\chi},\omega,t)= \left(% \begin{array}{cc} \kappa z(\theta_{t}\omega) \\ (1+k(0)\int_{\Omega}|\nabla \hat{w}|^{2}dx)A^{\frac{1}{2}} \hat{w}-a(x)g(\hat{w}_{t})-f(u)+f_{1}(\hat{y})+\hat{q}(x,t)+\kappa z(\theta_{t}\omega)\\ \kappa z(\theta_{t}\omega) \\ \end{array} \right). \] Now we need to establish some priori estimates for the solutions of Equation (50) and Equation (53), which are the basis of our later analysis.

Lemma 3. Let (2)-(5), (7)-(8) and (45) hold. Let \(\hat{B}(\tau,\omega)\subseteq B_{0}(\tau,\omega) \), \(\hat{B}=\{\hat{B}(\tau,\omega)\}_{\omega\in\Omega}\in \mathcal{D}(E) \) and \(\hat{\chi}_{0}(\omega)\in \hat{B}(\tau,\omega) \). Then there exist \( \hat{T}=\hat{T}(\hat{B},\omega)>0 \) and \(\hat{M}(\omega) \) such that the solution \(\hat{\chi}(t,\omega,\hat{\chi}_{\tau}(\omega)) \) of (50) satisfies, for P-a.e. \(\omega\in\Omega \) and all \(t\geq \hat{T} \),

\begin{equation}\|\hat{\chi}(r,\tau-t,\omega,\hat{\chi}_{\tau-t}(\omega))\|_{E}^{2}\leq\|\hat{\chi}_{\tau-t}\|^{2}e^{-2\sigma t}+\int_{\tau}^{r}\hat{r}(\omega)dr\leq \hat{M}(\omega). ~\label{4.32}\end{equation}
(54)

Proof. Taking the inner product of (50) with \(\hat{\chi} \) in \(E \), we have

\begin{equation}\left(\hat{\chi}',\hat{\chi}\right)+\left(H(\hat{\chi}),\hat{\chi}\right)=\left(\hat{F}(t,x,\hat{\chi}) .\hat{\chi}\right).\label{4.33}\end{equation}
(55)
Using the Hölder, Young and Poincaré inequalities, we get
\begin{equation}\left(H(\hat{\chi}),\hat{\chi}\right)=\varepsilon\|\Delta \hat{y}\|^{2}+\varepsilon^{2}(\hat{y},\check{y})-\varepsilon\| \check{y}\|^{2}-\frac{m_{0}\varepsilon^{2}}{2}\|\nabla \hat{y}\|^{2}+\frac{\delta}{4}\|\nabla \hat{\eta}\|_{\mu}^{2}.~\label{4.34}\end{equation}
(56)
Now, we estimate the terms on the right hand side of (55) one by one:
\begin{align} \displaystyle \left(\left(1+k(0)\|\nabla\hat{y}\|^{2}\right)\triangle\hat{y},\check{y}\right) &= \left(\left(1+k(0)\|\nabla\hat{y}\|^{2}\right)\nabla\hat{y}, \nabla(\frac{d\hat{y}}{dt}+\varepsilon \hat{y})\right)\notag\\&\leq \left(1+k(0)\|\nabla \hat{y}\|^{2}\right)\left(\frac{1}{2}\frac{d}{dt}\|\nabla \hat{y}\|^{2}+\frac{\varepsilon}{2} \|\nabla \hat{y}\|^{2}\right),\label{4.35}\end{align}
(57)
and from (2), it is easy to show that
\begin{equation}\displaystyle \left(a(x)g(\hat{y}_{t}),\check{y}\right) =\left(\alpha_{0}g(\vartheta)\left(\check{y}-\varepsilon \hat{y},\check{y}\right)\right)\\ \leq\alpha_{0}\alpha_{1}\|\check{y}\|^{2}-\alpha_{0}\alpha_{2}\varepsilon \left(\hat{y},\check{y}\right),\label{4.36}\end{equation}
(58)
where \(\vartheta \) is between \(0 \) and \(\check{y}-\varepsilon \hat{y} \).
\begin{equation}\displaystyle\left(\hat{q}(x,t),v\right) =\|\hat{q}(x,t)\|\|\check{y}\|\leq\frac{\|\hat{q}(x,t)\|^{2}}{(\alpha_{0}\alpha_{1}-\varepsilon)}+\frac{\alpha_{0} \alpha_{1}-\varepsilon}{4}\|\check{y}\|^{2} ,\label{4.37}\end{equation}
(59)
By the second terms on the right-hand side of (55) and (59), we can get
\begin{equation}\varepsilon(\varepsilon-\alpha_{2}\alpha_{0})(\hat{y},\check{y})\geq-\frac{\alpha_{0}\alpha_{2}\varepsilon}{\lambda}\|\nabla \hat{y}\|\| \check{y}\|\geq\frac{\alpha_{0}\alpha_{1}-\varepsilon}{4}\| \check{y}\|^{2}-\frac{\alpha^{2}_{0}\alpha^{2}_{2}\varepsilon^{2}}{(\alpha_{0}\alpha_{1}-\varepsilon)\lambda^{2}}\|\nabla \hat{y}\|^{2},\label{4.38}\end{equation}
(60)
Further, from (45) \(_{(\mathbf{A}_{2}),(\mathbf{A}_{4})} \), we infer
\begin{equation}(f_{1}(\hat{y}),\check{y})=(f_{1}(\hat{y}),\frac{d\hat{y}}{dt}+\varepsilon \hat{y}) \geq\frac{d}{dt}F_{1}(\hat{y})+\frac{\varepsilon}{k_{1}}(F_{1}(\hat{y})-c_{\mu}|\Gamma|)),~\label{4.39}\end{equation}
(61)
Thus, putting these estimates together in (55), we conclude that
\begin{align} &\displaystyle\frac{1}{2}\frac{d}{dr}\left(\| \check{y}\|^{2}+\left(1+k(0)\|\nabla\hat{y}\|^{2}\right)\|\nabla \hat{y}\|^{2}+\|\nabla \hat{y}(r)\|^{2}+\| \nabla\hat{\eta}\|_{\mu}^{2}+\tilde{F_{1}}(\hat{y})\right)\notag\\ &\;\;\;+\frac{\varepsilon}{2}\left(\|\check{y}\|^{2}+\left(1+k(0)\|\nabla\hat{y}\|^{2}\right)\|\nabla \hat{y}\|^{2}\right)+\varepsilon\left(\frac{1}{\lambda}-\frac{m\varepsilon}{2}- \frac{\varepsilon\alpha^{2}_{0}\alpha^{2}_{2}}{(\alpha_{0}\alpha_{1}-\varepsilon)\lambda^{2}}\right)\|\nabla \hat{y}(r)\|^{2}\notag\\ &\;\;\;+\frac{\alpha_{0}\alpha_{1}-\varepsilon}{2}\| \check{y}\|^{2}+\frac{\delta}{4}\|\hat{\eta}(r)\|_{\mu,1}^{2}+\frac{\varepsilon}{k_{1}}\hat{F_{1}}(y(r))\notag\\ &\leq \frac{\|q(x,t)\|^{2}}{(\alpha_{0}\alpha_{1}-\varepsilon)}+\frac{\varepsilon}{k_{1}}c_{\mu}|\Gamma|, \label{4.40}\end{align}
(62)
Choosing \(\varepsilon \) small enough that \(\varepsilon\left(\frac{1}{\lambda}-\frac{m\varepsilon}{2}-\frac{\varepsilon\alpha^{2}_{0}\alpha^{2}_{2}} {(\alpha_{0}\alpha_{1}-\varepsilon)\lambda^{2}}\right)>0 \) and \(\frac{\alpha_{0}\alpha_{1}-\varepsilon}{2}> 0 \), we set \(\sigma=\min\left\{\frac{\varepsilon}{2},\frac{\alpha_{0}\alpha_{1}-\varepsilon}{2}, 2\varepsilon\left(\frac{1}{\lambda}-\frac{m\varepsilon}{2}-\frac{\varepsilon\alpha^{2}_{0}\alpha^{2}_{2}} {(\alpha_{0}\alpha_{1}-\varepsilon)\lambda^{2}}\right)\right\} \) and \(\hat{\sigma}=\min\left\{\sigma,\frac{\varepsilon}{k_{1}},\frac{\delta}{4}\right\} \), which implies that
\begin{equation}\displaystyle\frac{d}{dt}\left\|\hat{\chi}(r)\right\|_{E}^{2} +\hat{\sigma}\|\hat{\chi}(r)\|_{E}^{2} \leq \frac{\|\hat{q}(x,t)\|^{2}}{(\alpha_{0}\alpha_{1}-\varepsilon)}+\frac{\varepsilon}{k_{1}}c_{\mu}|\Gamma| .\label{4.41}\end{equation}
(63)
Note that \(\hat{\chi}(r,\tau-t,\omega,\hat{\chi}_{\tau-t}(\omega))=\chi(r,\tau-t,\omega,\chi_{\tau-t}(\omega))-(0,z(\theta_{t}\omega),0)\in B_{0}(\tau,\omega) \). By the definition of \(B_{0}(\tau,\omega) \), it follows that \(\|\hat{\chi}(r,\tau-t,\omega,\hat{\chi}_{\tau-t}(\omega))\|_{E}^{2}\leq \hat{r}(\omega)+|z(\theta_{r-\tau}\omega)|= \hat{M}(\omega). \) Now, applying the Gronwall inequality on \([\tau-t,r] \), we arrive at (54): \(\|\hat{\chi}(r,\tau-t,\omega,\hat{\chi}_{\tau-t}(\omega))\|_{E}^{2}\leq \hat{M}(\omega). \) Hence, for every \(\hat{y}\in H^{1}_{0} \), by \(H^{1}_{0}\subset L^{\frac{2n}{n-2}} \) and (62), we have \[ 0\leq\int_{\Omega}|f_{1}(u)|dx \leq \mu_{1}\left(\left\|\hat{y}\right\|^{2}+\left\|\hat{y}\right\|_{L^{\frac{2n}{n-2}}}^{\frac{2n}{n-2}}\right)\leq \hat{r}(\omega)\left\|\nabla\hat{y}\right\|^{2},~n\geq3. \] The proof is completed.

Lemma 4. Suppose (2)-(4) hold. Let \(B_{1}(\omega)\subseteq B_{0}(\omega) \), \(\hat{B}=\{\hat{B}(\omega)\}_{\omega\in\Omega}\in \mathcal{D}(E) \) and \(\check{\chi}_{\tau}(\omega)\in \hat{B}(\omega) \). Then there exist \(\check{T}=\check{T}(\hat{B},\omega)>0 \) and a random radius \(\check{M}(\omega) \) such that the solution \(\check{\chi}(t,\tau,\omega,\check{\chi}_{\tau}(\omega)) \) of (53) satisfies, for P-a.e. \(\omega\in\Omega \) and all \(t\geq \check{T} \),

\begin{equation}\displaystyle\left\|A^{\nu}\check{\chi}\left(r,\tau-t,\theta_{-t}\omega,\check{\chi}_{\tau-t}(\theta_{-t}\omega)\right) \right\|_{E}^{2} \leq\|A^{\nu}\check{\chi}_{\tau-t}(\theta_{-t}\omega)\|^{2}e^{-2\sigma(\tau-t)}+ \check{r}(\omega) \leq~\check{M}(\omega),~~ t\geq\tau. \label{4.42}\end{equation}
(64)
We denote
\begin{equation}\nu\in\left(0,\min\{\frac{1}{4},\frac{n+2-(n-2)\gamma}{4}\}\right) ~,~\forall~1\leq\gamma\leq\frac{n+2}{n-2}.\label{4.43}\end{equation}
(65)
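For instance (purely as an illustration of the admissible range (65)), for \(n=3 \) and \(\gamma=3 \) one gets \[\nu\in\left(0,\min\Big\{\frac{1}{4},\frac{3+2-(3-2)\cdot 3}{4}\Big\}\right)=\left(0,\min\Big\{\frac{1}{4},\frac{1}{2}\Big\}\right)=\left(0,\frac{1}{4}\right). \]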

Proof. By (64), (23) and \(\check{\chi}=\chi-\hat{\chi} \), there exists a random variable \(r(\omega)>0 \) such that \[\max\{\|\chi(0,\omega,\chi(0,\omega))\|_{E},\|\check{\chi}((0,\omega,\check{\chi}(0,\omega)))\|_{E}\}\leq r(\omega). \] By the embedding relations, we have \(\mathbf{V}_{\nu_{1}}\subset\mathbf{V}_{\nu_{2}}, \) if \(\nu_{1}\geq\nu_{2} \) and \(\mathbf{V}_{\nu}\subset\mathbb{L}^{q} \), where \(\frac{1}{q}=\frac{1}{2}-\frac{\nu}{n},~~\frac{1}{\grave{q}}=\frac{1}{2}-\frac{1}{q} \) and \(H_{0}^{\nu}=D(A^{\frac{\nu}{2}})=\mathbf{V}_{\nu}\subset\mathbb{L}^{q}\subset\mathbb{L}^{\grave{q}} \subset\mathbf{V}_{-\nu}=D(A^{\frac{-\nu}{2}}) \). Multiplying (53) with \(A^{\nu}\check{\chi}(r) \) and integrating over \(\Gamma \), we can get

\begin{align}\left(H(\check{\chi}(r)),A^{\nu}\check{\chi}(r)\right)&=\left(% \begin{array}{ccc} \varepsilon u -v \\ -\varepsilon v+\varepsilon^{2}u+Au+\int_{0}^{\infty}\mu(s)A^{\frac{1}{2}} \eta(s)ds \\ \varepsilon u -v + \eta_{s} \end{array} \right)\left(% \begin{array}{ccc} A^{\nu}u\\ A^{\nu}v\\ A^{\nu}\eta \end{array} \right)\notag\\ &=\varepsilon\|A^{\frac{\nu+1}{2}} \hat{w}\|^{2}+\varepsilon^{2}(\hat{w},A^{\nu}\check{w})-\varepsilon\| A^{\frac{\nu}{2}} \check{w}\|^{2}-\frac{m_{0}\varepsilon^{2}}{2}\|A^{\frac{1+2\nu}{4}} \hat{w}\|^{2}+\frac{\delta}{4}\|A^{\frac{1+2\nu}{4}} \check{\eta}\|_{\mu}^{2}.\label{4.44}\end{align}
(66)
Now, using the Cauchy-Schwarz and Young inequalities one by one, we obtain
\begin{align}&\displaystyle\left(\mathbf{F}(t,x,\chi),A^{\nu} \chi\right)=\notag\\ &\left(% \begin{array}{cc} k h(x)z(\theta_{t}\omega) \\ (1+k(0)\|A^{\frac{1}{4}} \hat{w}\|^{2})A^{\frac{1}{2}} \hat{w}-a(x)g(\hat{w}_{t})+f_{1}(\hat{y})-f(u)+\check{q}(x,t)+\kappa h(x)z(\theta_{t}\omega)\\ k h(x)z(\theta_{t}\omega)\\ \end{array} \right) \left(% \begin{array}{ccc} A^{\nu}\hat{w}\\ A^{\nu}\check{w}\\ A^{\nu}\check{\eta} \end{array} \right).\label{4.45}\end{align}
(67)
From (3) \(_{(\mathbf{A}_{2}),(\mathbf{A}_{4})} \), one has that
\begin{align}&-\left(1+k(0)\|A^{\frac{1}{4}} \hat{w}\|^{2})A^{\frac{1}{2}}\hat{w},A^{\nu}\check{w}\right)=\left(\left(1+k(0)\|A^{\frac{1}{4}} \hat{w}\|^{2}\right)A^{\frac{\nu}{2}}\hat{w},A^{\frac{\nu}{2}}(\frac{d\hat{w}}{dt}+\varepsilon \hat{w}-ah(x) z(\theta_{t}\omega)\right)\notag\\ &\leq -\left(1+k(0)\|A^{\frac{1}{4}} \hat{w}\|^{2}\right)\left(\frac{1}{2}\frac{d}{dt}\|A^{\frac{\nu}{2}} \hat{w}\|^{2}+\frac{\varepsilon}{2} \|A^{\frac{\nu}{2}} \hat{w}\|^{2}\right) +\frac{\kappa^{2}}{2\varepsilon}\|A^{\frac{\nu}{2}} h(x))\|^{2} |z(\theta_{t}\omega)|^{2}.\label{4.46}\end{align}
(68)
Therefore from (2) and (4), it is straightforward to show that
\begin{align}&\displaystyle \left(a(x)g(\hat{w}_{t}),A^{\nu}\check{w}\right)=-\left(\alpha_{0}g(\vartheta)A^{\frac{\nu}{2}}\left(\check{w}-\varepsilon \hat{w}+\kappa h(x) z(\theta_{t}\omega)-g(0)\right),A^{\frac{\nu}{2}}\check{w}\right)\notag\\ &\leq-\alpha_{0}\alpha_{1}\|A^{\frac{\nu}{2}}\check{w}\|^{2}+\alpha_{0}\alpha_{2}\varepsilon\left( A^{\frac{\nu}{2}}\hat{w},A^{\frac{\nu}{2}}\check{w}\right)-\alpha_{0}g'(\vartheta)\kappa \left(h(x) z(\theta_{t}\omega,A^{\frac{\nu}{2}}\check{w}\right),\label{4.47}\end{align}
(69)
where \(\vartheta \) is between \(0 \) and \(\check{w}-\varepsilon \hat{w}+\kappa h(x) z(\theta_{t}\omega) \).
\begin{align}&\displaystyle\left(\check{q}(x,t),A^{\nu}\check{w}\right) =\|A^{\frac{\nu}{2}}\check{q}(x,t)\|\|A^{\frac{\nu}{2}}\check{w}\| \leq\frac{\|A^{\frac{\nu}{2}}\check{q}(x,t)\|^{2}}{2(\alpha_{0} \alpha_{1}-\varepsilon)}+\frac{\alpha_{0}\alpha_{1}-\varepsilon}{2}\|A^{\frac{\nu}{2}}\check{w}\|^{2} ,\label{4.48}\\ \end{align}
(70)
\begin{align} &\displaystyle((k\varepsilon h(x)z(\theta_{t}\omega),A^{\nu}\hat{w})) \leq \frac{\varepsilon}{4}\left\|A^{\frac{2\nu+1}{2}} \hat{w}\right\|^{2}+\varepsilon k^{2}\left\|A^{\frac{2\nu+1}{2}} h(x)\right\|^{2}\left| z(\theta_{t}\omega)\right|^{2},\label{4.49}\\ \end{align}
(71)
\begin{align} &\displaystyle(k\varepsilon h(x)z(\theta_{t}\omega),A^{\nu}\check{\eta})_{\mu} \leq \frac{2m_{0}k^{2}}{\delta}\left\|A^{\frac{2\nu+1}{4}} h(x)\right\|^{2}\left| z(\theta_{t}\omega)\right|^{2}+\frac{\delta}{8}\|A^{\frac{1+2\nu}{4}} \check{\eta}\|_{\mu}^{2} ,\label{4.50}\\ \end{align}
(72)
\begin{align} &\displaystyle-(\alpha_{0}g'(\vartheta)-2\varepsilon)\kappa h(x)z(\theta_{t}\omega),A^{\nu}\check{w}) \leq\frac{2(\alpha_{0}\alpha_{2} \kappa)^{2}}{\alpha_{0}\alpha_{1}-\varepsilon}\| A^{\frac{\nu}{2}}h(x)\|^{2}\left|z(\theta_{t}\omega)\right|^{2} +\frac{\alpha_{0}\alpha_{1}-\varepsilon}{4}\left\|A^{\frac{\nu}{2}}\check{w}\right\|^{2}.\label{4.51}\end{align}
(73)
By the second terms on the right-hand side of (66) and (69), we get
\begin{align}&\varepsilon(\varepsilon-\alpha_{2}\alpha_{0})(\hat{w},A^{\nu}\check{w})\geq-\alpha_{0}\alpha_{2}\varepsilon\| A^{\frac{\nu}{2}}\hat{w}\|\| A^{\frac{\nu}{2}}\check{w}\|\geq\frac{\alpha_{0}\alpha_{1}-\varepsilon}{4}\| A^{\frac{\nu}{2}}\check{w}\|^{2}-\frac{(\alpha_{0}\alpha_{2}\varepsilon)^{2}} {(\alpha_{0}\alpha_{1}-\varepsilon)\lambda^{2}}\| A^{\frac{1+2\nu}{4}}\hat{w}\|^{2}.\label{4.52}\end{align}
(74)
For the nonlinearity, with the aid of (4), Hölder inequality and the Sobolev embedding theorem, we estimate that \begin{align*}&\displaystyle\left(f(u)-f_{1}(\hat{y}),A^{\nu}\check{w}\right)=\left(f(u)-f_{1}(\hat{y}), A^{\nu}(\hat{w}_{t}+\varepsilon \hat{w}-\kappa h(x) z(\theta_{t}\omega))\right)\\ &\leq\frac{d}{dt}\int_{\Gamma}(f(u)-f_{1}(\hat{y}))A^{\nu}\hat{w} dx+\varepsilon\int_{\Gamma}(f(u)-f_{1}(\hat{w}))A^{\nu}\hat{w} dx\\ &-\int_{\Gamma}(f'(u)u_{t}-f'_{1}(\hat{w})\hat{w}_{t})A^{\nu}\hat{w} dx-\kappa\int_{\Gamma}(f(u)-f_{1}(\hat{w}))\left|A^{\nu} h(x)||z(\theta_{t}\omega)\right|dx. \end{align*} Infer to \(\mathbf{A}_{4} \), (45)-(46), use Cauchy-Schwartz, Young's inequality and embedding theorem \(\mathbf{V}_{1+\nu}\subset L^\frac{2n}{n-2(1-\nu)} \), \(\mathbf{V}_{1-\nu}\subset L^\frac{2n}{n+2(1-\nu)} \) and \(\mathbf{V}_{1}\hookrightarrow L^{^\frac{2n}{n-2}} \), we gain
\begin{align} &\displaystyle\int_{\Gamma}((f(u)-f_{1}(\hat{w}))A^{\nu} (\kappa|h(x)||z(\theta_{t}\omega)|)dx\leq\int_{\Gamma} \left((f_{1}(u)+f_{2}(u) -f_{1}(\hat{w})\right)A^{\nu}\kappa (|h(x)||z(\theta_{t}\omega)|)dx\notag\\ &\displaystyle\leq \mu_{1}\kappa\left(\int_{\Gamma}\left(1+|\hat{y}|^{\frac{4}{n-2}}+|\hat{w}|^{\frac{4}{n-2}}\right) ^{\frac{2n}{4}}dx\right)^{\frac{4}{2n}}\left(\int_{\Gamma}|\hat{w}|^{\frac{2n}{n-2(1+\nu)}}dx\right)^{\frac{n-2(1+\nu)}{2n}} \|A^{\nu}h(x)\||z(\theta_{t}\omega)|\notag\\ &\;\;\;+\mu_{2}\kappa\left(\int_{\Gamma}\left(1+|\hat{u}|^{\gamma}\right)^{\frac{2n}{n+2(1-\nu)}}dx\right)^{\frac{n+2(1-\nu)}{2n}} \|A^{\nu}h(x)\||z(\theta_{t}\omega)|\notag\\ &\leq \mu_{3}\left(1+\|u\|_{L^{\frac{2n}{n-2}}}^{\frac{4}{n-2}}+\|u\|_{L^{\frac{2n}{n-2}}}^{\frac{4}{n-2}}\right) \|\hat{w}\|_{L^{\frac{2n}{n-2(1+\nu)}}}\| A^{\frac{\nu}{2}}h(x)\|_{L^{\frac{2n}{1+2\nu}}}|z(\theta_{t}\omega)|\notag\\ &\;\;\;+\mu_{4} \left(1+\left\|u\right\|^{\gamma}_{L{\frac{2n}{n+2(1-\nu)}}}\right)\| A^{\frac{\nu}{2}}h(x)\|_{L^{\frac{2n}{1+2\nu}}}|z(\theta_{t}\omega)|\notag\\ &\displaystyle\leq \mu_{5}\left(r^{2}_{1}(\omega) +\left\|A^{\frac{1+\nu}{2}}\hat{w}\right\|^{2}\right) +\mu_{6}\left\|A^{\nu} h(x)\|^{2}| z(\theta_{t}\omega)\right|^{2}. \label{4.53}\end{align}
(75)
and therefore \[\int_{\Gamma}(f'(u)u_{t}-f'_{1}(\hat{w})\hat{w}_{t})A^{\nu}\hat{w} dx=\int_{\Gamma}((f'_{1}(u)-f'_{1}(\hat{w}))u_{t}+f'_{1}(\hat{w})\hat{w}_{t}+f'_{2}(u)u_{t}) A^{\nu}\hat{w} dx. \] Estimating the terms on the right-hand side one by one, we get
\begin{align} &\displaystyle\int_{\Gamma}(f'_{1}(u)-f'_{1}(\hat{y}))u_{t}A^{\nu}\check{w}dx\leq \mu_{7}\int_{\Gamma}\left(1+|\hat{y}|^{\frac{6-n}{n-2}}+|\hat{w}|^{\frac{6-n}{n-2}}\right)|\hat{w}||A^{\nu}\hat{w}||u_{t}|dx\notag\\ &\leq \mu_{8}\left(\int_{\Gamma}\left(1+|\hat{y}|^{\frac{6-n}{n-2}}+|\hat{w}|^{\frac{6-n}{n-2}}\right)^{\frac{2n}{6-n}}dx\right)^\frac{6-n}{2n} \left(\int_{\Gamma}|u_{t}|^{2}dx\right)^{\frac{1}{2}} \left(\int_{\Gamma}|\hat{w}|^{\frac{2n}{n-2(1+\nu)}}dx\right)^{\frac{n-2(1+\nu)}{2n}} \left(\int_{\Gamma} |A^{\nu}\hat{w}|^{\frac{2n}{n-2(1-\nu)}}dx\right)^{\frac{n-2(1-\nu)}{2n}}\notag\\ &\leq \mu_{9}\left(1+\|\hat{y}\|_{L^{{\frac{2n}{n-2}}}}^{{\frac{6-n}{n-2}}}+\|\hat{w}\|_{L^{\frac{2n}{n-2}}} ^{\frac{6-n}{n-2}}\right)\|u_{t}\|_{L^{2}}\|\hat{w}\|_{L^{\frac{2n}{n-2(1+\nu)}}} \|A^{\nu}\hat{w}\|_{L^{\frac{2n}{n-2(1-\nu)}}}\notag\\ &\leq \mu_{10}\left(\|\hat{w}\|_{L^{\frac{2n}{n-2(1+\nu)}}} \|A^{\nu}\hat{w}\|_{L^{\frac{2n}{n-2(1-\nu)}}}\right) +\mu_{11} \left(\|\hat{y}\|_{L^{{\frac{2n}{n-2}}}}^{{\frac{6-n}{n-2}}}+\|\hat{w}\|_{L^{\frac{2n}{n-2}}} ^{\frac{6-n}{n-2}}\right)\|\hat{w}\|_{L^{\frac{2n}{n-2(1+\nu)}}} \|A^{\nu}\hat{w}\|_{L^{\frac{2n}{n-2(1-\nu)}}}\notag\\ &\leq \mu_{12}r_{2}\left(\omega\right)\left(\|A^{\frac{\nu}{2}}\hat{w}\|^{2}+\|A^{\frac{1+\nu}{2}}\hat{w}\|^{2}\right). \label{4.54}\end{align}
(76)
Similarly, by (45) \(_{\textbf{A}_{2}} \) and (65), we get
\begin{align} &\displaystyle\int_{\Gamma}f'_{1}(\hat{y})\hat{w}_{t}A^{\nu}\hat{w} dx \leq \mu_{13}\left(\int_{\Gamma}\left(1+|\hat{y}|^{\frac{4}{n-2}}\right)^{\frac{2n}{4}}dx\right)^{\frac{4}{2n}}\notag\\ &\times \left(\int_{\Gamma}|\hat{w}_{t}|^{\frac{2n}{n-2(1+\nu)}}dx\right)^{\frac{n-2(1+\nu)}{2n}} \times\left(\int_{\Gamma} |A^{\nu}\hat{w}|^{\frac{2n}{n-2(1-\nu)}}dx\right)^{\frac{n-2(1-\nu)}{2n}}\notag\\ &\leq \mu_{14}(1+\|\hat{y}\|^{\frac{4}{n-2}}_{L^{\frac{2n}{n-2}}})\|\hat{w}_{t}\|_{L^{\frac{2n}{n-2(1+\nu)}}} \|A^{\nu}\hat{w}\|_{L{\frac{2n}{n-2(1-\nu)}}}\notag \\ &\leq \mu_{15} r_{3}\left(\omega\right)(\|A^{\frac{\nu}{2}}\hat{w}\|+|\varepsilon|) \|A^{\frac{1+\nu}{2}}\hat{w}\|_{L{\frac{2n}{n-2(1-\nu)}}}\notag\\ &\leq \mu_{16} r_{3}\left(\omega\right)(\|A^{\frac{\nu}{2}}\hat{w}\|^{2}+|\varepsilon|^{2}) +\frac{\varepsilon}{16}\|A^{\frac{1+\nu}{2}}\hat{w}\|^{2}_{L{\frac{6}{1+2\nu}}}.\label{4.55}\end{align}
(77)
Furthermore, by (45) \(_{\textbf{A}_{3}} \) and (65), and noting that \(\nu\leq\frac{n+2-(n-2)\gamma}{4} \), we get
\begin{align} &\displaystyle\int_{\Gamma}f'_{2}(u)u_{t}A^{\nu}\hat{w} dx \leq \mu_{17}\int_{\Gamma}\left(\left(1+|u|^{\gamma}\right)|u_{t}||A^{\nu}\hat{w}|\right)dx\notag\\ &\leq \mu_{18}\left(\int_{\Gamma}\left(1+\|u\|^{\gamma}\right)^{\frac{2n}{n+2(1-\nu)}}dx\right)^{\frac{n+2(1-\nu)}{2n}} \left(\int_{\Gamma}|u_{t}|^{2}dx\right)^{\frac{1}{2}} \left(\int_{\Gamma} |A^{\nu}\hat{w}|^{\frac{2n}{n-2(1-\nu)}}dx\right)^{\frac{n-2(1-\nu)}{2n}}\notag\\ &\leq \mu_{19}\left(1+\| u\|_{L{\frac{2n}{n+2(1-\nu)}}}^{\gamma}\right)\|u_{t}\|_{L^{2}}\|A^{\frac{\nu}{2}}\hat{w}\|_{L{\frac{2n}{n-2(1-\nu)}}}\notag\\ &\leq \mu_{20} r^{2}_{4}\left(\omega\right)+\frac{\varepsilon}{8}\|A^\frac{1+\nu}{2}\hat{w}\|^{2}. \label{4.56}\end{align}
(78)
Combining the above inequalities (66)-(78), we obtain
\begin{align}&\displaystyle\frac{1}{2}\frac{d}{dt}\left[\|A^{\frac{\nu}{2}}\check{\chi}\|_{E}^{2}+2(f(u)-f_{1}(\hat{w}))\right] +\frac{\varepsilon}{4}\|A^{\frac{\nu}{2}}\check{\chi}\|_{E}^{2}+\frac{\varepsilon}{2}(f(u)-f_{1}(\hat{w}))\notag\\ &\leq |\kappa| |z(\theta_{t}\omega)|\|A^{\frac{\nu}{2}}\check{\chi}\|_{E}^{2}+\check{\mu}_{C}[1+r^{2}_{1}(\omega)+r^{2}_{2} (\omega)+r^{2}_{3}(\omega)+r^{2}_{4}(\omega)\notag\\ &\;\;\;+\left|z(\theta_{t}\omega)\right|^{2} +\left\|A^{\nu}h(x)\right\|^{2}\left|z(\theta_{t}\omega)\right|^{2}+\|A^{\frac{\nu}{2}}\check{q}(x)\|^{2}] .\label{4.57}\end{align}
(79)
Applying Gronwall's inequality to (79) on \([0,r] \) and replacing \(\omega \) by \(\theta_{-t}\omega \), we deduce that
\begin{align} &\displaystyle\|A^{\frac{\nu}{2}}\bar{\varphi}(t,\theta_{-t}\omega;\varphi_{0})\|_{E}^{2}\leq \left(\|A^{\frac{\nu}{2}}\varphi(r,\theta_{-t}\omega;\varphi_{0})\|_{E}^{2}+2(f(u(r,\theta_{-t}\omega;\chi _{0}))\right.\left.-f_{1}(\hat{w}(r,\theta_{-t}\omega;\chi_{0}))\right)\notag\\ &\leq\left(\|A^{\frac{\nu}{2}}\check{\chi}\|_{E}^{2}+ (f(u)-f_{1}(\hat{w}))\right)exp^{2\int_{r}^{0}\left(\sigma-|\kappa||z(\theta_{s}\omega)|\right)(s,\omega)ds}+\int_{0}^{r} \varrho_{1}(\theta_{s}\omega)exp^{2\int_{r}^{s}\left(\sigma-|\kappa||z(\theta_{\varsigma}\omega)|\right) (\varsigma,\omega)d\varsigma}ds.\label{4.58}\end{align}
(80)
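For orientation, the form of Gronwall's inequality used here is the standard differential one; stated schematically (with generic functions \(y,a,b \) that are not notation from this paper), it reads
\[ y'(s)+a(s)y(s)\leq b(s)\ \ \text{on}\ [0,r]\ \Longrightarrow\ y(r)\leq y(0)e^{-\int_{0}^{r}a(\varsigma)d\varsigma}+\int_{0}^{r}b(s)e^{-\int_{s}^{r}a(\varsigma)d\varsigma}ds, \]
and it is applied with \(y \) the bracketed quantity on the left-hand side of (79), while \(a \) and \(b \) are read off from the damping and forcing terms there.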
We can choose \(\varrho_{1}(\theta_{t}\omega) \) and \(\check{\mu}_{C} \), depending on the positive constants \(\varepsilon,\delta,\kappa,\alpha_{0},\alpha_{1},\alpha_{2}, m_{0},\mu_{i} \), such that
\begin{align} &\displaystyle\varrho_{1}(\theta_{t}\omega)=\check{\mu}_{C}[1+r^{2}_{1}(\omega)+r^{2}_{2}(\omega) +r^{2}_{3}(\omega)+r^{2}_{4}(\omega)\notag\\&\;\;\;+\left\|A^{\frac{\nu}{2}}h(x)\right\|^{2}\left|z(\theta_{t}\omega)\right|^{2} +\left\|A^{\frac{\nu}{2}}h(x)\right\|^{2}\left|z(\theta_{t}\omega)\right|^{2} +\|A^{\frac{\nu}{2}}\check{q}(x)\|^{2}].\label{4.59}\end{align}
(81)
Note that \begin{align*} &\displaystyle\int_{\Gamma}((f(u)-f_{1}(\hat{w}))A^{\frac{\nu}{2}}\hat{w}dx \leq\int_{\Gamma} \left((f_{1}(u)+f_{2}(u) -f_{1}(\hat{w})\right)A^{\frac{\nu}{2}}\hat{w}dx\\ &\displaystyle\leq \mu_{21}\left(\int_{\Gamma}\left(1+|\hat{y}|^{\frac{4}{n-2}}+|\hat{w}|^{\frac{4}{n-2}}\right)\left|\hat{w}\right|\left|A^{\frac{\nu}{2}}\hat{w}\right|dx\right) +\mu_{22}\left(\int_{\Gamma}\left(1+|\hat{u}|^{\gamma}\right)\left|A^{\frac{\nu}{2}}\hat{w}\right|dx\right). \end{align*} Thus, by the Sobolev embedding
\begin{align} &\displaystyle \left(\int_{\Gamma}\left(1+|\hat{y}|^{\frac{4}{n-2}}+|\hat{w}|^{\frac{4}{n-2}}\right)\right)\left|\hat{w}\right| +\left(\int_{\Gamma}\left(1+|\hat{u}|^{\gamma}\right)dx\right)\left|A^{\frac{\nu}{2}}\hat{w}\right|dx\notag\\ &\displaystyle\leq \mu_{23}\left(\int_{\Gamma}\left(1+|\hat{y}|^{\frac{4}{n-2}}+|\hat{w}|^{\frac{4}{n-2}}\right)^{\frac{2n}{4}}dx\right) ^{\frac{4}{2n}}\left(\int_{\Gamma}|\hat{w}|^{\frac{2n}{n-2(1+\nu)}}dx\right)^{\frac{n-2(1+\nu)}{2n}} \left(\int_{\Gamma}|A^{\nu}\hat{w}|^{\frac{2n}{n-2(1-\nu)}}dx\right)^{\frac{n-2(1-\nu)}{2n}}\notag\\ &\;\;\;+\mu_{24}\left(\int_{\Gamma}\left(1+|\hat{u}|^{\gamma}\right)^{\frac{2n}{n+2(1-\nu)}}dx\right)^{\frac{n+2(1-\nu)}{2n}} \times\left(\int_{\Gamma}|A^{\nu}\hat{w}|^{\frac{2n}{n-2(1-\nu)}}dx\right)^{\frac{n-2(1-\nu)}{2n}}\notag\\ &\displaystyle\leq \mu_{25}r^{2}_{5}(\omega) +\mu_{25}\left\|A^{\frac{1+\nu}{2}}\hat{w}\right\|^{2}, \label{4.60}\end{align}
(82)
where the constants \(\mu_{i}, i=1,2,\ldots,25 \), come from the embeddings \(D(A^{\frac{1+\nu}{2}})\hookrightarrow L^{\frac{2n}{n-2(1+\nu)}} \), \(D(A^{\frac{1-\nu}{2}})\hookrightarrow L^{\frac{2n}{n-2(1-\nu)}} \) and \(\mathbf{V}_{1}=H_{0}^{1}\hookrightarrow L^{\frac{2n}{n-2}} \).
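These exponents are the ones predicted by the standard fractional Sobolev embedding on a bounded domain; schematically, under the usual identification \(D(A^{\frac{s}{2}})\subset H^{s}(\Gamma) \) (recalled here only for the reader's convenience),
\[ D(A^{\frac{s}{2}})\subset H^{s}(\Gamma)\hookrightarrow L^{\frac{2n}{n-2s}}(\Gamma),\qquad 0<s<\frac{n}{2}, \]
so that the choices \(s=1+\nu \), \(s=1-\nu \) and \(s=1 \) give the three spaces listed above.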

Note that \(|z(\theta_{\varsigma}\omega)| \) is tempered; hence, applying the inequalities (81) and (82) in (80), the integrand of the second term on the right-hand side of (80) converges to zero exponentially as \(r\rightarrow-\infty \). Then we can show the following result:

\[\|A^{\frac{\nu}{2}}\check{\chi}(t,\theta_{-t}\omega;\chi_{0})\|_{E}^{2}\leq \check{M}^{2}_{2}(\omega). \] The proof is complete.

We now prepare to establish the existence of a random attractor for the random dynamical system \(\Phi \). It follows from Lemma 2 that \(\Phi \) has a closed random absorbing set in \(\mathcal{D} \); applying the Lemmas of Section 4, we then prove the existence of a random attractor by using tail estimates and a decomposition technique for the solutions, together with the \(\mathcal{D} \)-pullback asymptotic compactness.

Lemma 5. (see [2,3,15]) Let \(\mathbf{X_{0}},\mathbf{X},\mathbf{X_{1}} \) be three Banach spaces such that \(\mathbf{X_{0}}\hookrightarrow \mathbf{X}\hookrightarrow \mathbf{X_{1}} \), where the embedding \(\mathbf{X_{0}}\hookrightarrow \mathbf{X} \) is compact. Set \(Y=\chi(t,\hat{B}(\tau,\omega))\subset L^{2}_{\mu}(\mathbb{R}^{+},\mathbf{X}) \), the random bounded absorbing set from Lemma 4, and let \(\psi(t) \) be the solution operator of (53); by Lemma 4, there is a positive random radius \(M_{\nu}(\omega) \), depending on \(t \), such that

\begin{equation} \left.\begin{aligned} &\ 1).\;\;\;~~Y~ is~ bounded~ in~ L_{\mu}^{2}(\mathbb{R}^{+},\mathbf{X_{0}})\bigcap H_{\mu}^{1}(\mathbb{R}^{+},\mathbf{X_{1}}),\\ &\ 2).\;\;\;~\sup_{\eta\in Y,s\in\mathbb{R}^{+}}\|\nabla\eta(s)\|_{\mathbf{X}}^{2}\leq M_{\nu}(\omega). \end{aligned}\right\}~\label{4.61}\end{equation}
(83)
Then \(Y \) is relatively compact in \(L_{\mu}^{2}(\mathbb{R}^{+},\mathbf{X}) \). Further, for every \(\tau\in\mathbb{R} \), \(\omega\in\Omega \) and \(t\geq0 \), define
\begin{equation} \check{\eta}(t,\tau,\theta_{-t}\omega,\chi_{0}(\theta_{-t}\omega),s) =\left\{\begin{aligned} &\ \hat{w}(t,\tau,\theta_{-t}\omega,\chi_{0}(\theta_{-t}\omega))-\hat{w}(t-s,\tau,\theta_{-t+s}\omega,\chi_{0}(\theta_{-t+s}\omega)),~s\leq t,\\ &\ \hat{w}(t,\tau,\theta_{-t}\omega,\chi_{0}(\theta_{-t}\omega)),~t\leq s; \end{aligned}\right.\label{4.62}\end{equation}
(84)
\begin{equation} \check{\eta}_{s}(t,\tau,\theta_{-t}\omega,\chi_{0}(\theta_{-t}\omega)) =\left\{\begin{aligned} &\ \hat{w}_{t}(t-s,\tau,\theta_{-t+s}\omega,\chi_{0}(\theta_{-t+s}\omega)),~0\leq s\leq t,\\ &\ 0,~~~t\leq s. \end{aligned}\right.~\label{4.63}\end{equation}
(85)
Denote by \(\check{B} \) the closed ball of \(L_{\mu}^{2}(\mathbb{R}^{+},\mathbf{X_{0}})\bigcap H_{\mu}^{1}(\mathbb{R}^{+},\mathbf{X_{1}}) \) of random radius \(M_{0}(\omega) \); since we work on a bounded domain, \(\check{B} \) is a compact subset of \(\mathbf{X_{0}}\times \mathbf{X_{1}} \). Define the set \(\check{B}(\tau,\omega) \) by
\begin{equation}\check{B}(\tau,\omega)=\overline{\mathbb{\bigcup}_{\hat{\chi}_{\tau-t}(\theta_{-\tau}\omega)\in \check{B}(\theta_{-t}\omega)}\mathbb{\bigcup}_{t\geq 0}\check{\eta}(\tau,\tau-t,\theta_{-\tau}\omega,\chi_{\tau-t}(\theta_{-\tau}\omega),s)}~s\in\mathbb{R^{+}} ~\tau\in\mathbb{R}~,\omega\in\Omega,~\label{4.64}\end{equation}
(86)
where \(\nu \) is as in (65). Thus, employing (3), Lemma 2, Lemma 4 and (84), we get that
\begin{equation}\sup_{\eta\in \check{B},s\in\mathbb{R}^{+}}\|\nabla\eta(s)\|_{\mu}^{2}=\sup_{t\geq0}\sup_{\chi_{\tau-t}(\theta_{-\tau}\omega)\in B(\theta_{-t}\omega),s\in\mathbb{R}^{+}}\|\nabla\check{\eta}(\tau,\tau-t,(\theta_{-\tau}\omega),\chi_{\tau-t}(\theta_{-\tau}\omega),s)\|^{2} \leq M_{0}(\omega),~\label{4.65}\end{equation}
(87)
implying that
\begin{equation}\|\nabla\eta\|_{\mu}^{2}=\int_{0}^{+\infty}\mu(s)\|\nabla\eta(s)\|^{2}ds\leq M_{0}(\omega)\int_{0}^{+\infty}e^{-\sigma s}ds\leq\frac{M_{0}(\omega)}{\sigma}.~\label{4.66}\end{equation}
(88)
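The last bound rests only on the elementary integral below, together with the exponential decay of the memory kernel (\(\mu(s)\leq e^{-\sigma s} \) up to a constant), which is the standing assumption for this class of memory problems and is assumed here:
\[ \int_{0}^{+\infty}e^{-\sigma s}\,ds=\left[-\frac{1}{\sigma}e^{-\sigma s}\right]_{0}^{+\infty}=\frac{1}{\sigma},\qquad \sigma>0. \]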
We can now state our main result on the existence of a random attractor for the random dynamical system \(\Phi \).

Theorem 3. Suppose (2)-(4) hold. Then the continuous cocycle \(\Phi \) associated with Problem (16) has a unique \(\mathcal{D} \)-pullback random attractor \(\mathcal{A}(\tau,\omega)\in\mathcal{D} \) in \(\Gamma \).

Proof. Fix any \((\tau,\omega)\in\mathbb{R}\times\Omega \) and let \(\hat{\chi}_{\tau-t}(\theta_{-\tau}\omega)\in \hat{B}(\tau,\theta_{-t}\omega) \), where \(\check{B}\subset \hat{B}(\theta_{-t}\omega) \) is compact in \(\mathcal{D}(E) \). Indeed, \(\check{B} \), the closed ball of \(\mathbf{V}_{2\nu+1}\times\mathbf{V}_{2\nu}\hookrightarrow E \) of radius \(\hat{M}(\omega)\in\mathcal{D}(E) \), is compact, where \(\nu \) satisfies (65). Therefore, \(\Lambda(\tau, \omega) \) is compact in \(E \) for any bounded non-random set \(B \) of \(E \). By Lemma 3 and \(\chi_{\tau-t}(\theta_{-\tau }\omega)\in \check{B}(\tau,\theta_{-t}\omega) \), we have \(\chi_{\tau-t}=\check{\chi}_{\tau-t}-\hat{\chi}_{\tau-t}\in \Lambda(\tau,\omega) \), where \(\chi_{\tau-t} \) is given by (50). Then there exists a random set \(\hat{M}(\omega)\in \check{B}\subseteq B(\tau,\omega)\in \mathcal{D}(E) \) such that

\begin{equation}d_{H}\left(\Phi(t,\tau,\theta_{-t}\omega,B(\tau,\theta_{-t}\omega)),\Lambda(\tau,\omega)\right)\leq \hat{M}(\omega)e^{-\sigma t}~\rightarrow 0,~ as~t\rightarrow +\infty.\label{4.67}\end{equation}
(89)
From Lemma 3, there exists \(\hat{T}=\hat{T}(\tau,\omega,B)\geq 0 \) such that the following attraction property holds: \[\hat{\chi}(\tau,\tau-t,\theta_{-\tau}\omega,B(\tau,\theta_{-t}\omega))\subseteq B_{0}(\tau,\omega),\quad\forall t\geq \hat{T}. \] Let \(t\geq \hat{T} \) and \(\check{T}= t-\hat{T}\geq T(\tau,\omega,B_{0})\geq 0 \). Using the cocycle property (iii) of \(\Phi \), we obtain
\begin{align}&\displaystyle\hat{\chi}(\tau,\tau-t,\theta_{-\tau}\omega,B(\tau-t,\theta_{-\tau}\omega))\notag\\ &=\hat{\chi}(t,\tau-\check{T}-\hat{T},\theta_{-t}\omega,B(\tau-\check{T}-\hat{T},\theta_{-\tau}\omega))\notag\\ &=\hat{\chi}(\tau,\tau-\check{T},(\theta_{-\tau}\omega),\chi(\tau-\check{T}, \tau-\hat{T}-\check{T},\theta_{-\tau}\omega,B(\tau-\check{T}-\hat{T},\theta_{-\tau}\omega))\notag\\ &\subseteq \hat{\chi}(\tau,\tau-\check{T},\theta_{-\tau}\omega,B_{0}(\theta_{-\check{T}}\omega))\subseteq \hat{B}(\tau,\theta_{-\tau}\omega).\label{4.68}\end{align}
(90)
Take any \(\hat{\chi}(\tau,\tau-t,(\theta_{-\tau}\omega),\chi_{\tau-t}(\theta_{-\tau}\omega))\in \hat{\chi}(\tau,\tau-t,\theta_{-\tau}\omega,B(\tau-t,\theta_{-t}\omega)) \) for \(t\geq \hat{T}+T(\tau,\omega,B_{0}) \), where \(\hat{\chi}_{\tau-t}(\theta_{-\tau}\omega)\in B(\tau-t,\theta_{-t}\omega) \). It follows from Lemmas 2, 3 and (90) that
\begin{align}&\displaystyle\inf_{\eta\in \Lambda(\tau,\omega)}\|\chi(\tau,\tau-t,\theta_{-\tau}\omega,\chi_{\tau-t}(\theta_{-\tau}\omega))- \eta(\tau,\tau-t,\theta_{-\tau}\omega,\chi_{\tau-t}(\theta_{-\tau}\omega))\|^{2}_{E}\notag\\ &\leq\|y(\tau,\tau-t,\theta_{-\tau}\omega,y_{\tau-t}(\theta_{-\tau}\omega))\|^{2}_{E} \leq \hat{M}^{2}(\omega)e^{-\sigma_{3} t}~,~\forall t>\hat{T}+T(\tau,\omega,B_{0}).\label{4.69}\end{align}
(91)
Thus, from the relation (19) between \(\hat{\mathbf{\Phi}} \) and \(\check{\mathbf{\Phi}} \), one easily obtains that, for any nonrandom bounded set \(B \),
\begin{equation}d_{H}(\bar{\mathbf{\Phi}}(t,\tau,\theta_{-t}\omega,B(\tau,\theta_{-t}\omega)),\Lambda(\tau,\omega))\leq \hat{M}(\omega)e^{-\sigma_{3} t}~\rightarrow 0~ as~t\rightarrow +\infty~.~\label{4.70}\end{equation}
(92)
It follows from Lemma 1, Lemma 2, Lemma 3 and Lemma 4 that \(\Phi \), associated with (16), possesses a \(\mathcal{D} \)-pullback random attractor \(\mathcal{A}(\tau,\omega)\subseteq\Lambda(\tau,\omega)\bigcap B_{0}(\omega) \). The proof is complete.

Acknowledgments

The authors would like to thank the referee for improving the readability of the paper.

Author Contributions

All authors contributed equally to the writing of this paper. All authors read and approved the final manuscript.

Conflict of Interests

The authors declare no conflict of interest.

References

  1. Dafermos, C. M. (1970). Asymptotic stability in viscoelasticity. Archive for Rational Mechanics and Analysis, 37(4), 297-308.
  2. Borini, S., & Pata, V. (1999). Uniform attractors for a strongly damped wave equation with linear memory. Asymptotic Analysis, 20(3-4), 263-277.
  3. Pata, V., & Zucchi, A. (2001). Attractors for a damped hyperbolic equation with linear memory. Advances in Mathematical Sciences and Applications, 11, 505-529.
  4. Ma, Q., & Zhong, C. (2004). Existence of strong global attractors for hyperbolic equation with linear memory. Applied Mathematics and Computation, 157(3), 745-758.
  5. Park, J. Y., & Kang, J. R. (2010). Global attractor for hyperbolic equation with nonlinear damping and linear memory. Science China Mathematics, 53(6), 1531-1539.
  6. Arnold, L., & Chueshov, I. (1998). Order-preserving random dynamical systems: equilibria, attractors, applications. Dynamics and Stability of Systems, 13(3), 265-280.
  7. Flandoli, F., & Schmalfuss, B. (1996). Random attractors for the 3D stochastic Navier-Stokes equation with multiplicative white noise. Stochastics: An International Journal of Probability and Stochastic Processes, 59(1-2), 21-45.
  8. Crauel, H., Debussche, A., & Flandoli, F. (1997). Random attractors. Journal of Dynamics and Differential Equations, 9(2), 307-341.
  9. Temam, R. (1997). Infinite-dimensional dynamical systems in mechanics and physics. Springer-Verlag, New York.
  10. Wang, B. (2012). Sufficient and necessary criteria for existence of pullback attractors for non-compact random dynamical systems. Journal of Differential Equations, 253(5), 1544-1583.
  11. Fan, X. (2004). Random attractor for a damped sine-Gordon equation with white noise. Pacific Journal of Mathematics, 216(1), 63-76.
  12. Yin, F., & Liu, L. (2014). D-pullback attractor for a non-autonomous wave equation with additive noise on unbounded domains. Computers & Mathematics with Applications, 68(3), 424-438.
  13. Crauel, H., & Flandoli, F. (1998). Hausdorff dimension of invariant sets for random dynamical systems. Journal of Dynamics and Differential Equations, 10(3), 449-474.
  14. Yang, M., Duan, J., & Kloeden, P. (2011). Asymptotic behavior of solutions for random wave equations with nonlinear damping and white noise. Nonlinear Analysis: Real World Applications, 12(1), 464-478.
  15. Zhou, S., & Zhao, M. (2015). Random attractors for damped non-autonomous wave equations with memory and white noise. Nonlinear Analysis: Theory, Methods & Applications, 120, 202-226.
  16. Wang, Z., & Zhou, S. (2015). Asymptotic behavior of stochastic strongly wave equation on unbounded domains. Journal of Applied Mathematics and Physics, 3(03), 338-357.
  17. Ball, J. M. (1973). Stability theory for an extensible beam. Journal of Differential Equations, 14(3), 399-418.
  18. Kang, J. R. (2011). Global attractor for an extensible beam equation with localized nonlinear damping and linear memory. Mathematical Methods in Applied Sciences, 34(12), 1430-1439.
  19. Park, J. Y., & Kang, J. R. (2011). Uniform attractor for non-autonomous suspension bridge equations with localized damping. Mathematical Methods in Applied Sciences, 34(4), 487-496.
  20. Xu, L., & Ma, Q. (2015). Existence of random attractors for the floating beam equation with strong damping and white noise. Boundary Value Problems, 2015(1), 1-13.
  21. Pazy, A. (2012). Semigroups of linear operators and applications to partial differential equations (Vol. 44). Springer Science & Business Media.
]]>
On the non-linear diophantine equation \({\boldsymbol{379}}^{\boldsymbol{x}}\boldsymbol{+}{\boldsymbol{397}}^{\boldsymbol{y}}\boldsymbol{=}{\boldsymbol{z}}^{\boldsymbol{2}}\) https://old.pisrt.org/psr-press/journals/oms-vol-4-2020/on-the-non-linear-diophantine-equation/ Mon, 30 Nov 2020 10:04:22 +0000 https://old.pisrt.org/?p=4726
OMS-Vol. 4 (2020), Issue 1, pp. 397 - 399 Open Access Full-Text PDF
Sudhanshu Aggarwal, Nidhi Sharma
Abstract: In this article, authors discussed the existence of solution of non-linear diophantine equation \({379}^x+{397}^y=z^2,\) where \(x,y,z\) are non-negative integers. Results show that the considered non-linear diophantine equation has no non-negative integer solution.
]]>

Open Journal of Mathematical Sciences

On the non-linear diophantine equation \({\boldsymbol{379}}^{\boldsymbol{x}}\boldsymbol{+}{\boldsymbol{397}}^{\boldsymbol{y}}\boldsymbol{=}{\boldsymbol{z}}^{\boldsymbol{2}}\)

Sudhanshu Aggarwal\(^1\), Nidhi Sharma
Department of Mathematics, National P. G. College, Barhalganj, Gorakhpur-273402, U. P., India.; (S.A)
Indian Institute of Technology Roorkee-247667, U. K., India.; (N.S)
\(^{1}\)Corresponding Author: sudhanshu30187@gmail.com

Abstract

In this article, authors discussed the existence of solution of non-linear diophantine equation \({379}^x+{397}^y=z^2,\) where \(x,y,z\) are non-negative integers. Results show that the considered non-linear diophantine equation has no non-negative integer solution.

Keywords:

Prime number, diophantine equation, solution, integers.

1. Introduction

Diophantine equations are equations that are to be solved in integers. They are among the important equations of number theory and have many applications in algebra, analytic geometry and trigonometry; they can also be used to prove the existence of irrational numbers [1,2]. Acu [3] studied the diophantine equation \(2^x+5^y=z^2\) and proved that \(\left\{x=3,y=0,z=3\right\}\) and \(\left\{x=2,y=1,z=3\right\}\) are the solutions of this equation. Kumar et al. [4] considered the non-linear diophantine equations \({61}^x+{67}^y=z^2\) and \({67}^x+{73}^y=z^2\) and showed that these equations have no non-negative integer solution. Kumar et al. [5] studied the non-linear diophantine equations \({31}^x+{41}^y=z^2\) and \({61}^x+{71}^y=z^2\) and determined that these equations also have no non-negative integer solution. Rabago [6] discussed an open problem posed by B. Sroysang and showed that, for \(p=17\), the diophantine equation \(8^x+p^y=z^2,\) where \(x,y,z\) are positive integers, has only three solutions, namely \(\left\{x=1,y=1,z=5\right\},\) \(\left\{x=2,y=1,z=9\right\}\) and \(\left\{x=3,y=1,z=23\right\}\). The diophantine equations \(8^x+{19}^y=z^2\) and \(8^x+{13}^y=z^2\) were studied by Sroysang [7,8], who proved that each has a unique non-negative integer solution, namely \(\left\{x=1,y=0,z=3\right\}\). Sroysang [9] proved that the diophantine equation \({31}^x+{32}^y=z^2\) has no non-negative integer solution.

The main aim of this article is to discuss the existence of solutions of the non-linear diophantine equation \({379}^x+{397}^y=z^2,\) where \(x,y,z\) are non-negative integers.

2. Preliminary

Lemma 1. The non-linear diophantine equation \({379}^x+1=z^2,\) where \(x,z\) are non-negative integers, has no solution in non-negative integers.

Proof. Since \(379\) is an odd prime, \({379}^x\) is an odd number for every non-negative integer \(x\), which implies that \({379}^x+1=z^2\) is an even number, so \(z\) is an even number. Moreover, since \(0^2\equiv 0\), \(1^2\equiv 1\) and \(2^2\equiv 1\pmod{3}\), every perfect square satisfies

\begin{equation} \label{GrindEQ__1_} z^2\equiv 0\pmod{3} \;\;\;\text{or}\;\;\; z^2\equiv 1\pmod{3}. \end{equation}
(1)
Now, \(379\equiv 1\pmod{3}\), which implies \({379}^x\equiv 1\pmod{3}\) for every non-negative integer \(x\), so \({379}^x+1\equiv 2\pmod{3}\) for every non-negative integer \(x\). Hence
\begin{equation} \label{GrindEQ__2_} z^2\equiv 2\pmod{3}. \end{equation}
(2)
Equation (2) contradicts Equation (1). Hence the non-linear diophantine equation \({379}^x+1=z^2\) has no non-negative integer solution.

Lemma 2. The non-linear Diophantine equation \({397}^y+1=z^2,\) where \(y,z\) are non-negative integers, has no solution in non-negative integers.

Proof. Since \(397\) is an odd prime, \({397}^y\) is an odd number for every non-negative integer \(y\), which implies that \({397}^y+1=z^2\) is an even number, so \(z\) is an even number. Moreover, every perfect square satisfies

\begin{equation} \label{GrindEQ__3_} z^2\equiv 0\pmod{3} \;\;\;\text{or}\;\;\; z^2\equiv 1\pmod{3}. \end{equation}
(3)
Now, \(397\equiv 1\pmod{3}\), which implies \({397}^y\equiv 1\pmod{3}\) for every non-negative integer \(y\), so \({397}^y+1\equiv 2\pmod{3}\) for every non-negative integer \(y\). Hence
\begin{equation} \label{GrindEQ__4_} z^2\equiv 2\pmod{3}. \end{equation}
(4)
Equation (4) contradicts Equation (3). Hence the non-linear Diophantine equation \({397}^y+1=z^2\) has no non-negative integer solution.
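The two congruence facts used in the proofs of Lemmas 1 and 2, namely that a perfect square is congruent to \(0\) or \(1\) modulo \(3\) while \({379}^x+1\) and \({397}^y+1\) are congruent to \(2\) modulo \(3\), can be checked mechanically. The following Python sketch (added here purely as an illustration; the range of exponents is an arbitrary choice) confirms them for small exponents:

```python
# Illustrative check (not part of the original proofs) of the congruences
# used in Lemmas 1 and 2; the exponent range is an arbitrary choice.
squares_mod_3 = {(z * z) % 3 for z in range(3)}   # possible residues of z^2 mod 3
assert squares_mod_3 == {0, 1}                    # no perfect square is 2 (mod 3)

for exponent in range(50):
    assert (pow(379, exponent, 3) + 1) % 3 == 2   # 379^x + 1 is 2 (mod 3)
    assert (pow(397, exponent, 3) + 1) % 3 == 2   # 397^y + 1 is 2 (mod 3)

print("379^x + 1 and 397^y + 1 are always 2 (mod 3); no square has that residue.")
```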

Theorem 1(Main theorem). The non-linear diophantine equation \({379}^x+{397}^y=z^2,\) where \(x,y,z\) are non-negative integers, has no solution in non-negative integers.

Proof. We consider the following four cases:

  • Case 1. If \(x=0\) then the non-linear diophantine equation \({379}^x+{397}^y=z^2\) becomes \(1+{397}^y=z^2\), which has no non-negative integer solution by Lemma 2.
  • Case 2. If \(y=0\) then the non-linear diophantine equation \({379}^x+{397}^y=z^2\) becomes \({379}^x+1=z^2\), which has no non-negative integer solution by Lemma 1.
  • Case 3. If \(x,y\) are positive integers, then \({379}^x\) and \({397}^y\) are odd numbers, which implies that \( {379}^x+{397}^y=z^2\) is an even number, so \(z\) is an even number. Moreover, every perfect square satisfies
    \begin{equation} \label{GrindEQ__5_} z^2\equiv 0\pmod{3} \;\;\;\text{or}\;\;\; z^2\equiv 1\pmod{3}. \end{equation}
    (5)
                 Now, \(379\equiv 1\pmod{3}\) and \(397\equiv 1\pmod{3}\), which imply \({379}^x\equiv 1\pmod{3}\) and \({397}^y\equiv 1\pmod{3}\), so \({379}^x+{397}^y\equiv 2\pmod{3}\). Hence
    \begin{equation} \label{GrindEQ__6_} z^2\equiv 2\pmod{3}. \end{equation}
    (6)
                 Equation (6) contradicts Equation (5). Hence the non-linear Diophantine equation \({379}^x+{397}^y=z^2\) has no non-negative integer solution in this case.
  • Case 4. If \(x=y=0\), then \({379}^x+{397}^y=1+1=2=z^2\), which is impossible because \(2\) is not a perfect square. Hence the diophantine equation \({379}^x+{397}^y=z^2\) has no non-negative integer solution (a small numerical check consistent with these cases is sketched after this list).
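As a small numerical sanity check of the case analysis above (added here for illustration only, and not a substitute for the proof; the bound \(20\) on the exponents is an arbitrary choice), one may verify by brute force that \({379}^x+{397}^y\) is never a perfect square for small non-negative exponents:

```python
# Illustrative brute-force check: for 0 <= x, y < 20, the number 379^x + 397^y
# is congruent to 2 (mod 3) and is never a perfect square, as the theorem asserts.
from math import isqrt

def is_square(n: int) -> bool:
    r = isqrt(n)
    return r * r == n

for x in range(20):
    for y in range(20):
        n = 379**x + 397**y
        assert n % 3 == 2          # matches the congruence argument of the proof
        assert not is_square(n)    # hence n cannot equal z^2

print("No perfect squares of the form 379^x + 397^y found for 0 <= x, y < 20.")
```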

3. Conclusion

In this article, the authors discussed the existence of solutions of the non-linear diophantine equation \({379}^x+{397}^y=z^2\), where \(x,y,z\) are non-negative integers, and determined that this non-linear equation has no non-negative integer solution.

Author Contributions

All authors contributed equally to the writing of this paper. All authors read and approved the final manuscript.

Conflict of Interests

The authors declare no conflict of interest.

References

  1. Mordell, L.J. (1969). Diophantine Equations. Academic Press, London, New York.
  2. Sierpinski, W. (1964). Elementary Theory of Numbers. Warszawa.
  3. Acu, D. (2007). On a Diophantine equation \(2^x+5^y=z^2.\) General Mathematics, 15(4), 145-148.
  4. Kumar, S., Gupta, S., & Kishan, H. (2018). On the non-linear diophantine equations \({61}^x+{67}^y=z^2\) and \({67}^x+{73}^y=z^2\). Annals of Pure and Applied Mathematics, 18(1), 91-94.
  5. Kumar, S., Gupta, D., & Kishan, H. (2018). On the non-linear diophantine equations \({31}^x+{41}^y=z^2\) and \({61}^x+{71}^y=z^2\). Annals of Pure and Applied Mathematics, 18(2), 185-188.
  6. Rabago, J. F. T. (2013). On an open problem by B. Sroysang. Konuralp Journal of Mathematics, 1(2), 30-32.
  7. Sroysang, B. (2012). More on the Diophantine equation \(8^x+{19}^y=z^2.\) International Journal of Pure and Applied Mathematics, 81(4), 601-604.
  8. Sroysang, B. (2014). On the Diophantine equation \(8^x+{13}^y=z^2.\) International Journal of Pure and Applied Mathematics, 90(1), 69-72.
  9. Sroysang, B. (2012). On the Diophantine equation \({{31}}^x+{{32}}^y=z^2.\) International Journal of Pure and Applied Mathematics, 81(4), 609-612.
]]>