OMA-Vol. 4 (2020), Issue 1, pp. 98 - 118

Open Journal of Mathematical Analysis

Turing instability for an attraction-repulsion chemotaxis system with logistic growth

Abdelhakam Hassan Mohammed\(^1\), Shengmao Fu
Faculty of Petroleum and Hydrology Engineering, Peace University, Almugled, West Kordofan, Sudan; (A.H.M.)
College of Mathematics and Statistics, Northwest Normal University, Lanzhou 730070, P.R. China; (S.F.)
\(^1\)Corresponding Author: abd111hakam@gmail.com

Abstract

In this paper, we investigate the nonlinear dynamics for an attraction-repulsion chemotaxis Keller-Segel model with logistic source term
\(u_{1t}=d_{1}\Delta{u_{1}}-\chi \nabla (u_{1}\nabla{u_{2}})+ \xi{ \nabla (u_{1}\nabla{u_{3}})}+\mathbf g(u),{\mathbf x}\in\mathbb{T}^{d}, t>0,\)
\( u_{2t}=d_{2}\Delta{u_{2}}+\alpha u_{1}-\beta u_{2},{\mathbf x}\in\mathbb{T}^{d}, t>0,\)
\(u_{3t}=d_{3}\Delta{u_{3}}+\gamma u_{1}- \eta u_{3},{\mathbf x}\in\mathbb{T}^{d}, t>0,\)
\( \frac{\partial{u_{1}}}{\partial{x_{i}}}=\frac{\partial{u_{2}}}{\partial{x_{i}}}=\frac{\partial{u_{3}}}{\partial{x_{i}}}=0,x_{i}=0,\pi, 1\leq i\leq d,\)
\( u_{1}(x,0)=u_{10}(x), u_{2}(x,0)=u_{20}(x), u_{3}(x,0)=u_{30}(x), {\mathbf x}\in\mathbb{T}^{d} (d=1,2,3).\)
Under the assumption of unequal diffusion coefficients, conditions for chemotaxis-driven instability are given in a \(d\)-dimensional box \(\mathbb{T}^{d}=(0,\pi)^{d}\) \((d=1,2,3)\). It is proved that, under these conditions, the unique positive constant equilibrium point \({\mathbf w_{c}}=(u_{1c},u_{2c},u_{3c})\) of the above model is nonlinearly unstable. Moreover, our results provide a quantitative characterization of early-stage pattern formation in the model.

Keywords:

Attraction-repulsion chemotaxis, logistic source, pattern formation, nonlinear instability.

1. Introduction

In this paper, we deal with the attraction-repulsion chemotaxis system

\begin{equation}\label{a1} \begin{cases} \displaystyle u_{1t}=d_{1}\Delta{u_{1}}-\chi \nabla (u_{1}\nabla {u_{2}})+ \xi{ \nabla (u_{1}\nabla {u_{3}})}+\mathbf g(u),&{\mathbf x}\in\mathbb{T}^{d}, t>0,\\ \displaystyle u_{2t}=d_{2}\Delta{u_{2}}+\alpha u_{1}-\beta u_{2},&{\mathbf x}\in\mathbb{T}^{d}, t>0,\\ \displaystyle u_{3t}=d_{3}\Delta{u_{3}}+\gamma u_{1}- \eta u_{3},&{\mathbf x}\in\mathbb{T}^{d}, t>0,\\ \displaystyle \frac{\partial{u_{1}}}{\partial{x_{i}}}=\frac{\partial{u_{2}}}{\partial{x_{i}}}=\frac{\partial{u_{3}}}{\partial{x_{i}}}=0,&x_{i}=0,\pi, 1\leq i\leq d,\\ \displaystyle u_{1}(x,0)=u_{10}(x), u_{2}(x,0)=u_{20}(x), u_{3}(x,0)=u_{30}(x),&{\mathbf x}\in\mathbb{T}^{d} (d=1,2,3). \end{cases} \end{equation}
(1)
in a \(d\)-dimensional box \(\mathbb{T}^{d}=(0,\pi)^{d}\) \((d=1,2,3)\), which is a bounded domain, where \(\alpha,\beta,\mu,\chi,\xi,\gamma,\eta>0\). In the model (1), \(u_{1}\), \(u_{2}\) and \(u_{3}\) represent the cell density, the concentration of the chemoattractant (attractive signal) and the concentration of the chemorepellent (repulsive signal), respectively, and \(\mathbf g(u)\) is a logistic source. The classical Keller-Segel system is obtained by setting \(d_{i} = 1\) \((i=1,2,3)\), \(\xi = 0\), \(u_{3}\equiv0\), \(\mathbf g(u)\equiv 0\) in (1); it models the mechanism of chemotaxis and has been extensively studied since the 1970s, see [1, 2, 3, 4] and the references therein. Apart from the aforementioned system, a source of logistic type is included in (1) to describe the spontaneous growth of cells. The effect of such a source in preventing unbounded growth has been widely studied.

Chemotaxis is the chemosensitive movement of species that detect and respond to chemical substances in their environment. The first chemotaxis model was proposed by Keller and Segel [5]:

\begin{equation}\label{a2} \begin{cases} \displaystyle \frac{\partial u}{\partial t}=\Delta{u}-\chi \nabla (u\nabla {v}),& {\mathbf x }\in\Omega,\\ \displaystyle \frac{\partial v}{\partial t}=\Delta{v}-v+u,&{\mathbf x}\in\Omega, \end{cases} \end{equation}
(2)
which describes the aggregation process in the slime mold Dictyostelium discoideum, where \(v\) denotes the chemical concentration and \(u\) is the concentration of the species. For this system, there have been abundant results. Osaki and Yagi [6] found that when \(n = 1\), all solutions are global and bounded. When \(n \geq 2\), blow-up may happen (see Horstmann and Wang [7]; Herrero et al. [8]; Winkler et al. [9]). A detailed introduction to the mathematics of the Keller-Segel model for chemotaxis is presented in Horstmann [1, 10, 11].

In the study of chemotaxis-diffusion-growth models, pattern dynamics is another mathematically challenging and physically important research topic (see Tello and Winkler [12], Aida and Yagi [13], Kurata et al. [14], Painter and Hillen [15], Okuda and Osaki [16], Kuto et al. [17] and Banerjee et al. [18]). Guo and Hwang [19] investigated the nonlinear dynamics near an unstable constant equilibrium in the classical Keller-Segel model. Their result can be interpreted as a rigorous mathematical characterization of pattern formation in the Keller-Segel model. Using a similar method, Fu and Liu [20] proved that the linearly unstable positive constant equilibrium in the Keller-Segel model with a logistic source is also unstable in the full nonlinear sense. The emergence of patterns is a phenomenon frequently observed in the physical world [21].

Many authors have investigated the formation of patterns by using self-diffusive reaction-diffusion models [21, 22, 23, 24, 25]. Recently, some researchers have attempted to uncover the effect of cross-diffusion on pattern formation, and found that with appropriate cross-diffusion coefficients, linear reaction terms are sufficient to produce pattern formation [26, 27, 28], but only little attention has been paid to this direction. Therefore, based on the model (1): First, we analyse criteria for linear stability and instability of the positive constant equilibrium \({\mathbf w_{c}}\) (see Theorem 1). Second, by applying higher-order energy estimates, the embedding theorem and the Guo-Strauss bootstrap technique (see Guo and Strauss [29]), it is proved that for any given general perturbation of magnitude \(\delta\), its nonlinear evolution is dominated by the corresponding linear dynamics along a fixed finite number of fastest growing modes, over a time period of order \(\ln{\frac{1}{\delta}}\) (see Theorem 2). We assert further that the positive constant equilibrium point \({\mathbf w_{c}}\) is nonlinearly unstable under the above conditions (Corollary 1). Each initial perturbation can behave drastically differently from another, which gives rise to the richness of patterns. Our results provide a quantitative characterization of the nonlinear evolution of early-stage spatiotemporal pattern formation in the model (1).

The organization of this paper is as follows: in Section 2, we first prove that Turing instability does not take place in the absence of the chemotactic effect. Second, we give linear stability and instability criteria for the model (1), and discuss some properties of solutions of the corresponding linearized system. In Section 3, we consider the growing modes of (1) and prove the bootstrap lemma. In Section 4, a quantitative characterization of pattern formation and the proof of nonlinear instability are given.

2. Linear stability and instability criteria

In this section, we study in detail the linear stability and linear instability of the positive constant equilibrium point \(\mathbf w_{c}=(1,\frac{\alpha}{\beta},\frac{\gamma}{\eta})\) of the model (1) in a \(d\)-dimensional box \(\Omega =\mathbb{T}^{d}=(0,\pi)^{d}\) \((d=1,2,3)\), with \(\mathbf g(u)=\mu u_{1}(1-u_{1})\).

2.1. Stability of positive constant equilibrium point for (1) without chemotaxis

We consider the stability of \({\mathbf w_{c}}\) for the corresponding system (1) without chemotaxis
\begin{equation}\label{d2} \begin{cases} \displaystyle u_{1t}=d_{1}\Delta{u_{1}}+\mu u_{1}(1-u_{1}),&{\mathbf x}\in\mathbb{T}^{d}, t>0,\\ \displaystyle u_{2t}=d_{2}\Delta{u_{2}}+\alpha u_{1}-\beta u_{2},&{\mathbf x}\in\mathbb{T}^{d}, t>0,\\ \displaystyle u_{3t}=d_{3}\Delta{u_{3}}+\gamma u_{1}- \eta u_{3},&{\mathbf x}\in\mathbb{T}^{d}, t>0,\\ \displaystyle \frac{\partial{u_{1}}}{\partial{x_{i}}}=\frac{\partial{u_{2}}}{\partial{x_{i}}}=\frac{\partial{u_{3}}}{\partial{x_{i}}}=0,&x_{i}=0,\pi, 1\leq i\leq d,\\ \displaystyle u_{1}(x,0)=u_{10}(x), u_{2}(x,0)=u_{20}(x), u_{3}(x,0)=u_{30}(x),&{\mathbf x}\in\mathbb{T}^{d} (d=1,2,3). \end{cases} \end{equation}
(3)
For the sake of convenience, take \({\mathbf w(x,t)= (u_{1}(x,t),u_{2}(x,t),u_{3}(x,t))^{\mathrm T}}\) and \[ G({w})=\left(\begin{array}{c} g_{1}({w}) \\ g_{2}({w}) \\ g_{3}({w}) \end{array}\right) = \left(\begin{array}{c} \mu{u_{1}}(1-u_{1}) \\ \alpha{u_{1}}-\beta{u_{2}} \\ \gamma{u_{1}}-\eta{u_{3}} \end{array}\right); \] then \[\frac{\partial{G}}{\partial{w}}\Big|_{{w_{c}}} \equiv{G_{w}({w_{c}})}= \left(\begin{array}{ccc} -\mu & 0 & 0\\ \alpha & -\beta & 0 \\ \gamma & 0 & -\eta \end{array} \right). \]

Lemma 1. The positive equilibrium point \(\mathbf w_{c}\) of (3) is locally asymptotically stable.

Proof. Let \(0=k_{1}< k_{2}< k_{3}< \cdots\) be the eigenvalues of the operator \(-\Delta\) on \(\mathbb{T}^{d}\) with the homogeneous Neumann boundary condition, and let \(E(k_{i})\) be the eigenspace corresponding to \(k_{i}\) in \(H^{1}(\mathbb{T}^{d})\). Let \({\mathbf X}=[H^{1}(\mathbb{T}^{d})]^{3}\) and \({\mathbf X}_{ij}=\left\{c\cdot\phi_{ij} \mid c\in \mathbb{R}^{3}\right\}\), where \(\left\{\phi_{ij}, j=1,\cdots,\dim E(k_{i}) \right\} \) is an orthonormal basis of \(E(k_{i})\). Then \({\mathbf X}=\oplus_{i=1}^\infty {\mathbf X}_i\), \({\mathbf X}_i=\oplus_{j=1}^{\dim E(k_i)}{\mathbf X}_{ij}\). Let \(D= \mathrm{diag}(d_{1}, d_{2}, d_{3})\). The linearization of (3) at \(\mathbf w_{c}\) is $$ {\mathbf{w}}_{t}=\left(D\Delta+\mathbf{G}_{\mathbf{w}}({\mathbf{w_{c}}})\right){\mathbf w}. $$ For each \(i\geq1\), \({\mathbf X}_i\) is invariant under the operator \(D\Delta+\mathbf{G}_{\mathbf{w}}({\mathbf{w_{c}}})\), and \(\lambda\) is an eigenvalue of this operator on \({\mathbf{X}}_i\) if and only if it is an eigenvalue of the matrix \( -k_{i}D+\mathbf{G}_{\mathbf{w}}({\mathbf{w_{c}}})\). The characteristic equation of \(-k_{i}D+\mathbf{G}_{\mathbf{w}}({\mathbf{w_{c}}})\), $$ \det (\lambda{I}-( -k_{i}D+\mathbf{G}_{\mathbf{w}}({\mathbf{w_{c}}})))= \det\left(% \begin{array}{ccc} \lambda+ k_{i}d_{1} +\mu& 0 & 0\\ -\alpha & \lambda+ k_{i}d_{2}+\beta & 0 \\ -\gamma & 0 & \lambda+ k_{i}d_{3}+\eta \end{array} \right) = 0, $$ reduces to \(\Psi(\lambda)= (\lambda+ k_{i}d_{1}+\mu)(\lambda+ k_{i}d_{2}+\beta)(\lambda+ k_{i}d_{3}+\eta)=0\), so \(\lambda_{1}=-(k_{i}d_{1}+\mu) \), \(\lambda_{2}=-(k_{i}d_{2}+\beta)\) and \(\lambda_{3} = -(k_{i}d_{3}+\eta)\). All the eigenvalues are negative, hence \( \mathbf w_{c}\) is locally asymptotically stable. This completes the proof.
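As an illustrative numerical check (not part of the proof), the following minimal Python sketch builds the matrices \(-k_{i}D+\mathbf{G}_{\mathbf w}({\mathbf w_{c}})\) for the first few Neumann eigenvalues \(k_{i}\), using hypothetical parameter values, and confirms that all eigenvalues have negative real part.

```python
import numpy as np

# Hypothetical parameter values chosen only for illustration.
d1, d2, d3 = 0.1, 1.0, 2.0                     # diffusion coefficients
mu, alpha, beta, gamma, eta = 1.0, 2.0, 1.0, 1.5, 1.0

D = np.diag([d1, d2, d3])
Gw = np.array([[-mu,   0.0,   0.0],            # Jacobian G_w(w_c) of the kinetics of (3)
               [alpha, -beta, 0.0],
               [gamma,  0.0, -eta]])

for k in range(0, 10):                         # Neumann eigenvalues k_i = |q|^2 on (0, pi)^d
    eigs = np.linalg.eigvals(-k * D + Gw)
    assert np.all(eigs.real < 0), f"unexpected non-negative eigenvalue at k={k}"
print("All eigenvalues of -k*D + G_w(w_c) have negative real part, as in Lemma 1.")
```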

2.2. Criteria of linear stability and instability

Let \(\hat{u}_{1}({\mathbf x},t)=u_{1}({\mathbf x},t)-u_{1c}\), \(\hat{u}_{2}({\mathbf x},t)=u_{2}({\mathbf x},t)-u_{2c}\), \(\hat{u}_{3}({\mathbf x},t)=u_{3}({\mathbf x},t)-u_{3c}\) denote the nonlinear evolution of a perturbation around \((u_{1c},u_{2c},u_{3c})=(1,\frac{\alpha}{\beta},\frac{\gamma}{\eta})\). Omitting the symbol \(`` \wedge"\), we rewrite (1) as
\begin{equation}\label{d3} \begin{cases} \displaystyle u_{1t}=d_{1}\Delta{u_{1}}-\chi\Delta u_{2}+\xi\Delta u_{3}-\chi \nabla (u_{1}\nabla {u_{2}})+ \xi{ \nabla (u_{1}\nabla {u_{3}})}-\mu u_{1}(1+u_{1}),& {\mathbf x}\in\mathbb{T}^{d}, t>0,\\ \displaystyle u_{2t}=d_{2}\Delta{u_{2}}+\alpha u_{1}-\beta u_{2}, & {\mathbf x}\in\mathbb{T}^{d}, t>0,\\ \displaystyle u_{3t}=d_{3}\Delta{u_{3}}+\gamma u_{1}- \eta u_{3}, & {\mathbf x}\in\mathbb{T}^{d}, t>0,\\ \displaystyle \frac{\partial{u_{1}}}{\partial{x_{i}}}=\frac{\partial{u_{2}}}{\partial{x_{i}}}= \displaystyle \frac{\partial{u_{3}}}{\partial{x_{i}}}=0, & x_{i}=0,\pi, 1\leq i\leq d,\\ \displaystyle u_{1}(x,0)=u_{10}(x),u_{2}(x,0)=u_{20}(x),u_{3}(x,0)=u_{30}(x),& {\mathbf x}\in\mathbb{T}^{d}(d=1,2,3). \end{cases} \end{equation}
(4)
The corresponding linearized system can be written as
\begin{equation}\label{d4} \begin{cases} \displaystyle u_{1t}=d_{1}\Delta{u_{1}}-\chi\Delta u_{2}+\xi\Delta u_{3}-\mu u_{1} ,&{\mathbf x}\in\mathbb{T}^{d}, t>0,\\ \displaystyle u_{2t}=d_{2}\Delta{u_{2}}+\alpha u_{1}-\beta u_{2},&{\mathbf x}\in\mathbb{T}^{d}, t>0,\\ \displaystyle u_{3t}=d_{3}\Delta{u_{3}}+\gamma u_{1}- \eta u_{3},&{\mathbf x}\in\mathbb{T}^{d}, t>0,\\ \displaystyle \frac{\partial{u_{1}}}{\partial{x_{i}}}=\frac{\partial{u_{2}}}{\partial{x_{i}}}=\frac{\partial{u_{3}}}{\partial{x_{i}}}=0,&x_{i}=0,\pi, 1\leq i\leq d,\\ \displaystyle u_{1}(x,0)=u_{10}(x), u_{2}(x,0)=u_{20}(x), u_{3}(x,0)=u_{30}(x),&{\mathbf x}\in\mathbb{T}^{d} (d=1,2,3). \end{cases} \end{equation}
(5)
Let \({\mathbf w}({\mathbf x},t)\equiv(u_{1}({\mathbf x},t),u_{2}({\mathbf x},t),u_{3}({\mathbf x},t))^{\mathrm{T}}\), \({\mathbf q}=(q_{1},\ldots,q_{d})\in \mathbb{N}^{d}\) and \(e_{{\mathbf q}}({\mathbf x})=\prod^{d}\limits_{i=1}\cos(q_{i}x_{i})\). Then \(\{e_{{\mathbf q}}({\mathbf x})\}_{{\mathbf q}\in\mathbb{N}^{d}}\) forms a basis of the space of functions in \(\mathbb{T}^{d}\) that satisfy the homogeneous Neumann boundary condition. We try to find a normal mode to the linearized system (5) of the following form
\begin{equation}\label{d5} {\mathbf w}({\mathbf x},t)\equiv{\mathbf r_{q}}e^{\lambda_{{\mathbf q}}t}e_{{\mathbf q}}({\mathbf x}), \end{equation}
(6)
where \({\mathbf r_{{\mathbf q}}}\) is a vector depending on \({\mathbf q}\). Substituting (6) into (5), we have \[ \lambda_{{\mathbf q}}{\mathbf r_{{\mathbf q}}}=\left(% \begin{array}{ccc} -d_{1}q^2-\mu & \chi q^2 & -\xi q^2\\ \alpha & -d_{2}q^2 -\beta & 0\\ \gamma &0 & -d_{3}q^2-\eta \end{array}% \right){\mathbf r_{{\mathbf q}}}:={\mathbf L_{q}}{\mathbf r_{{\mathbf q}}}, \] where \(q^2=|{\mathbf q}|^{2}=\sum^{d}\limits_{i=1}q^2_{i}\). Then the corresponding characteristic equation of \({\mathbf L_{q}}\) is
\begin{equation}\label{d6} \psi(\lambda_{{\mathbf q}})=\lambda^{3}_{{\mathbf q}}+\bar{B}_{2}\lambda^{2}_{{\mathbf q}} +\bar{B}_{1}\lambda_{{\mathbf q}}+\bar{B}_{0}=0, \end{equation}
(7)
where
\begin{equation}\label{d7} \begin{cases} \displaystyle \bar{B_{2}}=(d_{1}+d_{2}+d_{3})q^{2}+(\mu +\beta +\eta):=C_{21}q^2+C_{22},\\ \displaystyle\bar{B_{1}}=(d_{1}d_{2}+d_{1}d_{3}+d_{2}d_{3})q^{4}+[\mu{(d_{2}+d_{3})} +\beta(d_{1}+d_{3})+\eta(d_{1}+d_{2})-\alpha\chi\\ \displaystyle-\gamma\xi]q^{2}+(\mu\beta+\mu\eta+\beta\eta) :=C_{11}q^{4}+C_{12}q^{2}+C_{13},\\ \displaystyle \bar{B}_{0}:=C_{01}q^{6}+C_{02}q^{4}+C_{03}q^{2}+C_{04} \end{cases} \end{equation}
(8)
and
\begin{equation}\label{d8} \begin{cases} C_{01}:=d_{1}d_{2}d_{3},\\ C_{02}:=\beta d_{1}d_{3}+\eta d_{1}d_{2}+\mu d_{2}d_{3}-\alpha \chi d_{3}-\gamma\xi d_{2},\\ C_{03}:=\beta\eta d_{1}+\mu\eta d_{2}+\mu\beta d_{3}-\eta\alpha\chi-\beta\gamma\xi, \\ C_{04}:=-\det({\mathbf G}_{\mathbf {w}}({\mathbf w_{c}}))=\mu\beta\eta. \end{cases} \end{equation}
(9)
In order to consider instability of \({\mathbf w_{c}}\), we make the following basic assumptions:
  • (H\(_{1}\)) There exists \({\textbf q}\in \mathbb{N}^{d}\) such that the matrix \({\textbf L_{q}}\) has at least one eigenvalue with positive real part;
  • (H\(_{2}\)) \(d_{1}, d_{2}, d_{3}>0\) and \(d_{i}\neq d_{j}\), \(i\neq j\), \(i,j=1,2,3\).
It is known that a first necessary condition for Turing instability is \(d_{i}\neq d_{j}\) \((i\neq j)\), meaning that \(u_{1}\), \(u_{2}\) and \(u_{3}\) must diffuse with different diffusion constants.
Let \(\lambda_{1}({\mathbf q})\), \(\lambda_{2}({\mathbf q})\), \(\lambda_{3}({\mathbf q})\) be the solutions of \(\det(\lambda_{{\mathbf q}}\mathrm{I}-{\mathbf L_{q}})=0\). It will be shown in Lemma 3 that there exist only finitely many values \({\mathbf q}\in \mathbb{N}^{d}\) such that \[\max\left\{\mathrm{Re}\lambda_{1}({\mathbf q}), \mathrm{Re}\lambda_{2}({\mathbf q}), \mathrm{Re}\lambda_{3}({\mathbf q})\right\}>0\] (see also the numerical sketch after assumption (H\(_{3}\)) below). Hence there exists a \(q^{2}\) attaining the largest eigenvalue
\begin{equation}\label{d9} \lambda_{\max}= \max\limits_{{\mathbf q}\in \mathbb{N}^{d}}\max\limits_{1\leq i\leq3}\mathrm{Re}\lambda_{i}(q^{2})>0. \end{equation}
(10)
  • (H\(_{3}\)) At \(\mathbf{\overline{q}}=(\overline{q}_{1},\cdot\cdot\cdot,\overline{q}_{d})\in \mathbb{N}^{d}\) which attains \(\lambda_{\max}=\mathrm{Re}\lambda_{i}({\mathbf {\overline{q}}})\), we assume that the Jordan canonical form of the matrix \({\mathbf L_{\overline{q}}}={\mathbf G}_{\mathbf w}({\mathbf w_{c}})+{\mathbf Q}(\overline{q}^2)\) is \(J=\mathrm{diag}(\lambda_{1}({\mathbf{\overline{q}}}),\lambda_{2}({\mathbf{\overline{q}}}),\lambda_{3}({\mathbf{\overline{q}}}))\), where \(\overline{q}^2=\sum^{d}\limits_{i=1}\overline{q}^2_{i}\) and \[ {\mathbf Q}(\overline{q}^2):=\left(% \begin{array}{ccc} -d_{1}\overline{q}^2 &\chi\overline{q}^2 & -\xi\overline{q}^2\\ 0 & -d_{2}\overline{q}^2 & 0\\ 0 & 0 & -d_{3}\overline{q}^2 \end{array}% \right). \]
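The following minimal Python sketch (with hypothetical parameter values, not taken from the paper) makes the search for \(\lambda_{\max}\) in (10) concrete: it assembles \({\mathbf L_{q}}\) as in the normal-mode computation above for a range of \(q^{2}\) and reports the unstable modes together with \(\lambda_{\max}\) and the maximizing \(q^{2}\).

```python
import numpy as np

# Hypothetical parameters chosen only to illustrate the computation of lambda_max.
d1, d2, d3 = 0.05, 1.0, 1.5
mu, alpha, beta, gamma, eta = 1.0, 8.0, 1.0, 0.5, 1.0
chi, xi = 2.0, 0.1

def L_q(q2):
    """Matrix L_q of the linearized system (5) for a mode with |q|^2 = q2."""
    return np.array([[-d1*q2 - mu,  chi*q2,        -xi*q2],
                     [ alpha,      -d2*q2 - beta,   0.0  ],
                     [ gamma,       0.0,           -d3*q2 - eta]])

growth = {q2: np.linalg.eigvals(L_q(q2)).real.max() for q2 in range(1, 400)}
unstable = [q2 for q2, g in growth.items() if g > 0]
q2_star = max(growth, key=growth.get)
print(f"{len(unstable)} unstable modes; lambda_max = {growth[q2_star]:.4f} at q^2 = {q2_star}")
```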
Let us carry on discussion on the characteristic equation (7). Denote \[A:=\bar{B}^{2}_{2}-3\bar{B}_{1}, B:=\bar{B}_{2}\bar{B}_{1}-9\bar{B}_{0}, C:=\bar{B}^{2}_{1}-3\bar{B}_{2}\bar{B}_{0}\] and \begin{eqnarray*}\Delta&=& B^{2}-4AC=3\left\{4\bar{B}^{3}_{1}+4\bar{B}^{3}_{2}\bar{B}_{0}+27\bar{B}^{2}_{0}-\bar{B}^{2}_{2}\bar{B}^{2}_{1}-18\bar{B}_{2}\bar{B}_{1}\bar{B}_{0}\right\}\\ &:=& Q_{6}q^{12}+Q_{5}q^{10}+Q_{4}q^{8}+Q_{3}q^{6}+Q_{2}q^{4}+Q_{1}q^{2}+Q_{0}, \end{eqnarray*} where \begin{eqnarray*} Q_{6}&=&3\left\{4C^{3}_{21}C_{01}+27C_{01}-C^{2}_{21}C^{2}_{11}-18C_{21}C_{11}C_{01}\right\},\\ Q_{5}&=&6\left\{27C_{01}C_{02}+2C^{3}_{21}C_{02}+6C^{2}_{21}C_{22}C_{01}-C^{2}_{21}C_{11}C_{12}-C_{21}C^{2}_{11}C_{22}\right.\\ &&\left.-9C_{21}C_{11}C_{02}-9C_{21}C_{12}C_{01}-9C_{22}C_{11}C_{01}\right\}, \end{eqnarray*} \begin{eqnarray*} Q_{4}&=&3\left\{27C_{02}+54C_{01}C_{03}+4C^{2}_{21}C_{03}+12C^{2}_{21}C_{22}C_{02}+12C_{21}C^{2}_{22}C_{01}+4C^{2}_{11}\right.\\ &&\left.-C^{2}_{21}C^{2}_{12}-2C_{11}C_{13}C^{2}_{21}-4C_{21}C_{22}C_{11}C_{03}-C^{2}_{22}C^{2}_{11}-18C_{21}C_{11}C_{03}\right.\\ &&\left.-18C_{21}C_{12}C_{02}-18C_{21}C_{13}C_{01}-18C_{22}C_{11}C_{02}-18C_{22}C_{12}C_{01}\right\},\\ Q_{3}&=&6\left\{27C_{01}C_{04}+27C_{02}C_{03}+2C^{3}_{21}C_{04}+12C^{3}_{22}C_{01}+6C^{2}_{21}C_{22}C_{03}\right.\\ &&\left.+6C_{21}C^{2}_{22}C_{02}+4C_{11}C_{12}-C^{2}_{21}C_{12}C_{13}-C_{21}C_{22}C^{2}_{12}-2C_{21}C_{22}C_{11}C_{13}\right.\\ &&\left.-C^{2}_{22}C_{11}C_{12}-9C_{21}C_{11}C_{04}-9C_{21}C_{12}C_{03}-9C_{21}C_{13}C_{02}\right.\\ &&\left.-9C_{22}C_{11}C_{03}-9C_{22}C_{12}C_{02}-9C_{22}C_{13}C_{01}\right\},\\ Q_{2}&=&3\left\{27C_{03}+54C_{02}C_{04}+4C^{3}_{22}C_{02}+12C^{2}_{21}C_{22}C_{04}+4C^{2}_{12}+12C_{21}C^{2}_{22}C_{03}\right.\\ &&\left.+8C_{11}C_{13}-C^{2}_{21}C^{2}_{13}-4C_{21}C_{22}C_{12}C_{13}-C^{2}_{22}C^{2}_{12}-2C^{2}_{22}C_{11}C_{13}-18C_{21}C_{12}C_{04}\right.\\ &&\left.-18C_{21}C_{13}C_{03}-18C_{22}C_{11}C_{04}-18C_{22}C_{13}C_{02}-18C_{22}C_{12}C_{03}\right\},\\ Q_{1}&=&6\left\{27C_{03}C_{04}+2C^{3}_{22}C_{03}+6C_{21}C^{2}_{22}C_{04}+4C_{12}C_{13}-C_{21}C_{22}C^{2}_{13}-C_{12}C_{13}C^{2}_{22}\right.\\ &&\left.-9C_{21}C_{13}C_{04}-9C_{22}C_{12}C_{04}-9C_{22}C_{13}C_{03}\right\},\\ Q_{0}&=&3\left\{ 4C^{3}_{22}C_{04}+4C^{2}_{13}+27C^{2}_{04}-C^{2}_{22}C^{2}_{13}-18C_{22}C_{13}C_{04}\right\}. \end{eqnarray*} The derivative of \(\psi(\lambda_{{\mathbf q}})\) is \(\psi'(\lambda_{{\mathbf q}})=3\lambda^{2}_{{\mathbf q}}+2\bar{B}_{2}\lambda_{{\mathbf q}}+\bar{B}_{1}.\) Obviously, equation \(\psi'(\lambda_{{\mathbf q}})=0\) has two roots as follows
\begin{eqnarray}\label{d10} \lambda^{\ast}_{1,2}({\mathbf q})&=&\frac{1}{3}\left(-\bar{B}_{2}\pm\sqrt{\bar{B}^{2}_{2}-3\bar{B}_{1}}\right)\notag\\ &=&\frac{1}{3}\left[-(C_{21}q^{2}+C_{22}) \pm \sqrt{(C^{2}_{21}-3C_{11})q^{4}+(2C_{21}C_{22}-3C_{12})q^{2}+(C^{2}_{22}-3C_{13})} \right]\notag\\ &=&\frac{1}{3}\left[-(C_{21}q^{2}+C_{22})\pm\sqrt{(C_{21}q^{2}+C_{22})^{2}-3(C_{11}q^{4}+C_{12}q^{2}+C_{13})} \right]. \end{eqnarray}
(11)
Next, let us recall a result concerning cubic equations from Hu et al. [30] (first introduced in Fan [31]), which will be used to discuss the linear stability and instability of the positive constant equilibrium solution of the model (1).

Lemma 2. Consider the equation \(x^{3} + bx^{2} +cx + d = 0\), where \(b, c, d \in \mathbb{R}\). Let \(A=b^{2}-3c\), \(B=bc-9d\), \(C=c^{2}-3bd\) and \(\Delta= B^{2 }- 4AC\). Then the equation has three real roots if and only if \(\Delta \leq 0\), and one real root and a pair of conjugate complex roots if and only if \(\Delta>0\). Furthermore, the conjugate complex roots are \(w=\frac{-2b+Y^{1/3}_{1}+Y^{1/3}_{2}}{6}\pm i\frac{\sqrt{3}\left({Y^{1/3}_{1}-Y^{1/3}_{2}}\right)}{6}\), where \(Y_{1,2}=bA+\frac{3\left(-B\pm\sqrt{B^{2}-4AC}\right)}{2}.\)
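As a quick numerical sanity check of Lemma 2 (illustrative only), the sketch below compares the sign of \(\Delta=B^{2}-4AC\) with the root structure returned by numpy for a few sample cubics.

```python
import numpy as np

def classify(b, c, d):
    """Return (Delta, number of real roots) for x^3 + b x^2 + c x + d = 0."""
    A, B, C = b*b - 3*c, b*c - 9*d, c*c - 3*b*d
    delta = B*B - 4*A*C
    roots = np.roots([1.0, b, c, d])
    n_real = int(np.sum(np.abs(roots.imag) < 1e-9))
    return delta, n_real

for b, c, d in [(-6.0, 11.0, -6.0),   # (x-1)(x-2)(x-3): three real roots, Delta <= 0
                ( 0.0,  1.0,  0.0),   # x(x^2+1): one real root, Delta > 0
                ( 1.0,  1.0,  1.0)]:  # (x+1)(x^2+1): one real root, Delta > 0
    delta, n_real = classify(b, c, d)
    print(f"b={b}, c={c}, d={d}: Delta={delta:.1f}, real roots={n_real}")
```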

According to Lemma 2, on the one hand, if \(\Delta\leq0\), then (7) has three real roots \(\lambda_{1}({\mathbf q})\), \(\lambda_{2}({\mathbf q})\), \(\lambda_{3}({\mathbf q})\), which we order as \(\lambda_{1}({\mathbf q})\leq\lambda_{2}({\mathbf q})\leq\lambda_{3}({\mathbf q})\). From this, we further infer that \(\lambda^{\ast}_{1,2}({\mathbf q})\) are also real. Moreover, since \(\bar{B}_{2}=-(\lambda_{1}({\mathbf q})+\lambda_{2}({\mathbf q})+\lambda_{3}({\mathbf q}))>0\), equation (7) has at least one root with negative real part. On the other hand, if \(\Delta>0\), then Equation (7) has one real root \(\lambda_{1}({\mathbf q})\) and a pair of conjugate complex roots \[\lambda_{2,3}({\mathbf q})=\frac{-2\bar{B}_{2}+Y^{1/3}_{1}+Y^{1/3}_{2}}{6}\pm \mathrm{i}\frac{\sqrt{3}\left({Y^{1/3}_{1}-Y^{1/3}_{2}}\right)}{6}\] with \[Y_{1,2}=\bar{B}_{2}A+\frac{3\left(-B\pm\sqrt{B^{2}-4AC}\right)}{2}.\] Notice that, by the Routh-Hurwitz criterion, for \({\mathbf q}=0\) and \(C_{22}C_{13}>C_{04}\), equation (7) has three roots with negative real parts. So we consider the case \({\mathbf q}\neq0\) in the sequel. In this section, our first main purpose is to give criteria for linear stability and instability of \({\mathbf w_{c}}\).

Theorem 1. (Linear stability and instability). Let \({\mathbf w_{c}}\) be the positive constant equilibrium solution of (1). Assume that \(\lambda_{1}\), \(\lambda_{2}\) and \(\lambda_{3}\) are the three roots of \(\psi(\lambda)=\lambda^{3}+\bar{B}_{2}\lambda^{2}+\bar{B}_{1}\lambda+\bar{B}_{0}=0\), and that \(\lambda^{\ast}_{1}\) and \(\lambda^{\ast}_{2}\) are the two roots of \(\psi'(\lambda)=3\lambda^{2}+2\bar{B}_{2}\lambda+\bar{B}_{1}=0\). Then we have the following conclusions:

  • (1) If one of the following conditions holds, then \({\mathbf w_{c}}\) is linearly stable.
    • H\(_{S1}\) \(\Delta\leq0\), \(\bar{B}_{0}>0\) and \(\lambda^{\ast}_{1}< \lambda^{\ast}_{2}< 0\).
    • H\(_{S2}\) \(\Delta>0\), \(\bar{B}_{0}>0\) and the conjugate complex roots \(\lambda_{2}\), \(\lambda_{3}\) satisfy \(\mathrm{Re}\lambda_{2}< 0\), \(\mathrm{Re}\lambda_{3}< 0\).
  • (2) If one of the following conditions holds, then \({\mathbf w_{c}}\) is linearly unstable.
    • H\(_{U1}\) \(\Delta\leq0\), and one of the following conditions holds:
    • H\(_{U11}\) \(\bar{B}_{0}>0\) and \(\lambda^{\ast}_{2}>\lambda^{\ast}_{1}>0\).
    • H\(_{U12}\) \(\bar{B}_{0}>0\) and \(\lambda^{\ast}_{2}>0>\lambda^{\ast}_{1}\).
    • H\(_{U13}\) \(\bar{B}_{0}< 0\) and \(\lambda^{\ast}_{2}>0>\lambda^{\ast}_{1}\).
    • H\(_{U14}\) \(\bar{B}_{0}< 0\) and \(\lambda^{\ast}_{1}< \lambda^{\ast}_{2}< 0\).
    • H\(_{U2}\) \(\Delta >0\), and one of the following conditions holds:
    • H\(_{U21}\) \(\bar{B}_{0}>0\) and the conjugate complex roots \(\lambda_{2}\), \(\lambda_{3}\) satisfy \(\mathrm{Re}\lambda_{2}>0\), \(\mathrm{Re}\lambda_{3}>0\).
    • H\(_{U22}\) \(\bar{B}_{0}< 0\) and the conjugate complex roots \(\lambda_{2}\), \(\lambda_{3}\) satisfy \(\mathrm{Re}\lambda_{2}< 0\), \(\mathrm{Re}\lambda_{3}< 0\).
Here \(\Delta= B^{2}-4AC\), \(A:=\bar{B}^{2}_{2}-3\bar{B}_{1}\), \(B:=\bar{B}_{2}\bar{B}_{1}-9\bar{B}_{0}\), \(C:=\bar{B}^{2}_{1}-3\bar{B}_{2}\bar{B}_{0}\), in particular, \(\bar{B}_{0}=\psi(0)=-\lambda_{1}\lambda_{2}\lambda_{3}\).

Proof. Let \(\Delta\leq0\). By Lemma 2, the equation \(\psi(\lambda)=\lambda^{3}+\bar{B}_{2}\lambda^{2}+\bar{B}_{1}\lambda+\bar{B}_{0}=0\) has three real roots \(\lambda_{1}\), \(\lambda_{2}\) and \(\lambda_{3}\) and assume \(\lambda_{1}\leq\lambda_{2}\leq\lambda_{3}\). Moreover, the equation \(\psi'(\lambda)=3\lambda^{2}+2\bar{B}_{2}\lambda+\bar{B}_{1}=0\) has also two real roots \(\lambda^{\ast}_{1}\) and \(\lambda^{\ast}_{2}\) with \(\lambda^{\ast}_{1}\leq\lambda^{\ast}_{2}\), and \[ \begin{array}{ll} \psi'(\lambda)>0, \forall \lambda\in(-\infty, \lambda^{\ast}_{1})\cup(\lambda^{\ast}_{2}, +\infty),\\ \psi'(\lambda)< 0, \forall \lambda\in(\lambda^{\ast}_{1}, \lambda^{\ast}_{2}). \end{array} \] Therefore, \[\psi(\lambda^{\ast}_{1})\geq0, \psi(\lambda^{\ast}_{2})\leq0\] and \[\lambda_{1}\in(-\infty, \lambda^{\ast}_{1}], \lambda_{2}\in[\lambda^{\ast}_{1}, \lambda^{\ast}_{2}], \lambda_{3}\in[\lambda^{\ast}_{2},+\infty). \] Let condition ( H\(_{U11}\)) hold. If \(\lambda^{\ast}_{1}>0\), then \(\lambda_{2}>0\), \(\lambda_{3}>0\). Since \(\psi(\lambda)\) is increasing for all \(\lambda\in(-\infty, \lambda^{\ast}_{1}]\) and \(\psi(0)=\bar{B}_{0}>0\), one has \(\lambda_{1}< 0\). If \(\lambda_{1}>0\), this contradicts \(\bar{B}_{2}>0\). Hence, \({\mathbf w_{c}}\) is linearly unstable. Under the condition ( H\(_{U12}\)), if \(\lambda^{\ast}_{1}< 0\), \(\lambda^{\ast}_{2}>0\), then \(\lambda_{1}< 0\). Since \(\psi(\lambda)\) is decreasing for all \(\lambda\in(\lambda^{\ast}_{1}, \lambda^{\ast}_{2})\) and \(\bar{B}_{0}>0\), we have \(\lambda_{2}>0\), \(\lambda_{3}>0\). This means that \({\mathbf w_{c}}\) is linearly unstable.
Similarly, it is proved that when condition (H\(_{U13}\)) or (H\(_{U14}\)) holds, the eigenvalues satisfy \(\lambda_{1}< 0\), \(\lambda_{2}< 0\) and \(\lambda_{3}>0\); that is, \({\mathbf w_{c}}\) is linearly unstable.
In the case (H\(_{S1}\)), by the monotonicity of \(\psi(\lambda)\) for all \(\lambda\in(\lambda^{\ast}_{2},+\infty)\), it holds that \(\lambda_{1}< 0\), \(\lambda_{2}< 0\) and \(\lambda_{3}< 0\). Hence, \({\mathbf w_{c}}\) is linearly stable.
We now let \(\Delta>0\). In view of Lemma 2, \(\psi(\lambda)=0\) has one real root \(\lambda_{1}\) and a pair of conjugate complex roots \(\lambda_{2}\), \(\lambda_{3}\). If condition (H\(_{U21}\)) holds, then it follows from \(\bar{B}_{0}>0\) that the real root \(\lambda_{1}< 0\). Therefore, \({\mathbf w_{c}}\) is linearly unstable because \(\mathrm{Re}\lambda_{2}>0\), \(\mathrm{Re}\lambda_{3}>0\). Similarly, we can also prove that if condition (H\(_{U22}\)) holds, then \({\textbf w_{c}}\) is linearly unstable. If condition (H\(_{S2}\)) holds, it is easy to see that \({\textbf w_{c}}\) is linearly stable. This completes the proof.

2.3. Some properties of solutions of the linearized system (5)

Lemma 3. If \({\mathbf q}\in \mathbb{N}^{d}\) and \(q^2\) is sufficiently large, then all eigenvalues of \({\textbf L_{q}}\) have negative real parts.

Proof. Notice that \(C_{21}\), \(C_{22}\), \(C_{11}\), \(C_{13}\), \(C_{01}\), \(C_{04}\) and \(\bar{B}_{2}\) are all positive, where these quantities are defined in (8) and (9). In addition, \(\bar{B}_{2}\), \(\bar{B}_{1}\), \(\bar{B}_{0}\) and \(\bar{B}_{2}\bar{B}_{1}-\bar{B}_{0}\) are positive if \(q^{2}\) is sufficiently large. It then follows from the Routh-Hurwitz criterion that all eigenvalues of \({\mathbf L_{q}}\) have negative real parts for \(q^{2}\) sufficiently large.

For given \({\mathbf q}\in \mathbb{N}^{d}\), let \(\lambda_{1}({\mathbf q})\), \(\lambda_{2}({\mathbf q})\), \(\lambda_{3}({\mathbf q})\) be the eigenvalues of \({\mathbf L_{q}}\) and denote the corresponding eigenvectors by \({\mathbf r}_{1}({\mathbf q})\), \({\mathbf r}_{2}({\mathbf q})\), \({\mathbf r}_{3}({\mathbf q})\). According to the structure of the eigenvectors, we divide \({\mathbf q}\) into the following four cases:
Case 1: \({\mathbf q}\in \mathbb{N}^{d}_{R1}\): \({\mathbf L_{q}}\) has three real eigenvalues \(\lambda_{1}({\mathbf q})\), \(\lambda_{2}({\mathbf q})\) and \(\lambda_{3}({\mathbf q})\), and three corresponding linearly independent eigenvectors \({\mathbf r}_{1}({\mathbf q})\), \({\mathbf r}_{2}({\mathbf q})\) and \({\mathbf r}_{3}({\mathbf q})\). In this case we arrange \(\lambda_{1}({\mathbf q})\leq\lambda_{2}({\mathbf q})\leq\lambda_{3}({\mathbf q})\).
Case 2: \({\mathbf q}\in \mathbb{N}^{d}_{R2}\): \({\mathbf L_{q}}\) has a simple root \(\lambda_{1}({\mathbf q})=\lambda_{s}({\mathbf q})\) and a double root \(\lambda_{2}({\mathbf q})=\lambda_{3}({\mathbf q})=\lambda_{d}({\mathbf q})\) (or \({\mathbf L_{q}}\) has a triple real root \(\lambda_{s}({\mathbf q})=\lambda_{d}({\mathbf q})\)); meanwhile, there are only two linearly independent real eigenvectors \({\mathbf r}_{s}({\mathbf q})\) and \({\mathbf r}_{d}({\mathbf q})\). In this case we need to find another independent vector \({\mathbf r}'_{d}({\mathbf q})\) satisfying \[({\mathbf L_{q}}-\lambda_{d}({\mathbf q}) \mathrm{I}){\mathbf r}'_{d}({\mathbf q})={\mathbf r}_{d}({\mathbf q}). \] Case 3: \({\mathbf q}\in \mathbb{N}^{d}_{R3}\): (7) has a triple eigenvalue \(\lambda({\mathbf q})\) to which there corresponds only one linearly independent eigenvector \({\mathbf r}({\mathbf q})\). In this case, we need to supplement two more independent vectors \({\mathbf r}'({\mathbf q})\) and \({\mathbf r}''({\mathbf q})\), which satisfy \[({\mathbf L_{q}}-\lambda({\mathbf q}) \mathrm{I}){\mathbf r}'({\mathbf q})={\mathbf r}({\mathbf q}), ({\mathbf L_{q}}-\lambda({\mathbf q}) \mathrm{I}){\mathbf r}''({\mathbf q})={\mathbf r}'({\mathbf q}).\] Case 4: \({\mathbf q}\in\mathbb{N}^{d}_{C}=\mathbb{N}^{d}-(\mathbb{N}^{d}_{R1}\bigcup\mathbb{N}^{d}_{R2}\bigcup\mathbb{N}^{d}_{R3})\): The characteristic equation (7) has one real root and a pair of conjugate complex roots. The eigenvalues and the corresponding eigenvectors are denoted by \(\lambda_{r}({\mathbf q})\), \(\mathrm{Re}\lambda_{c}({\mathbf q})+i\mathrm{Im}\lambda_{c}({\mathbf q})\), \(\mathrm{Re}\lambda_{c}({\mathbf q})-i\mathrm{Im}\lambda_{c}({\mathbf q})\) and \({\mathbf r}_{r}({\mathbf q})\), \(\mathrm{Re}{\mathbf r}_{c}({\mathbf q})+i\mathrm{Im}{\mathbf r}_{c}({\mathbf q})\), \(\mathrm{Re}{\mathbf r}_{c}({\mathbf q})-i\mathrm{Im}{\mathbf r}_{c}({\mathbf q})\), respectively. Notice that \(\mathrm{Re}{\mathbf r}_{c}({\mathbf q})\) and \(\mathrm{Im}{\mathbf r}_{c}({\mathbf q})\) are linearly independent vectors. Given any initial perturbation \({\mathbf w}({\mathbf x}, 0)\), it can be expressed as
\begin{eqnarray}\label{d11} {\mathbf w}({\mathbf x}, 0)&=&{\mathbf w}_{0}({\mathbf x})=\sum\limits_{{\mathbf q}\in\mathbb{N}^{d}}{\mathbf w}_{{\mathbf q}}e_{{\mathbf q}}({\mathbf x})\notag\\ &=&\sum\limits_{{\mathbf q}\in\mathbb{N}^{d}_{R1}}[w_{1}({\mathbf q}){\mathbf r}_{1}({\mathbf q}) +w_{2}({\mathbf q}){\mathbf r}_{2}({\mathbf q})+w_{3}({\mathbf q}){\mathbf r}_{3}({\mathbf q})]e_{{\mathbf q}}({\mathbf x})\notag\\ && +\sum\limits_{{\mathbf q}\in\mathbb{N}^{d}_{R2}}[w_{d}({\mathbf q}){\mathbf r}_{d}({\mathbf q}) +w'_{d}({\mathbf q}){\mathbf r}'_{d}({\mathbf q})+w_{s}({\mathbf q}){\mathbf r}_{s}({\mathbf q})]e_{{\mathbf q}}({\mathbf x})\notag\\ &&+\sum\limits_{{\mathbf q}\in\mathbb{N}^{d}_{R3}}[w({\mathbf q}){\mathbf r}({\mathbf q})+w'({\mathbf q}){\mathbf r}'({\mathbf q})+w''({\mathbf q}){\mathbf r}''({\mathbf q})]e_{{\mathbf q}}({\mathbf x})\notag\\ &&+\sum\limits_{{\mathbf q}\in\mathbb{N}^{d}_{C}}[w^{\mathrm{Re}}({\mathbf q})\mathrm{Re}{\mathbf r}_{c}({\mathbf q})+w^{\mathrm{Im}}({\mathbf q})\mathrm{Im}{\mathbf r}_{c}({\mathbf q})+w_{r}({\mathbf q}){\mathbf r}_{r}({\mathbf q})]e_{{\mathbf q}}({\mathbf x}), \end{eqnarray}
(12)
where \(w_{i}({\mathbf q}), w_{d}({\mathbf q}), w'_{d}({\mathbf q}), w_{s}({\mathbf q}), w({\mathbf q}), w'({\mathbf q}), w''({\mathbf q}), w^{\mathrm{Re}}({\mathbf q}), w^{\mathrm{Im}}({\mathbf q}), w_{r}({\mathbf q})\in\mathbb{R}\), \(i=1,2,3\) and
\begin{equation}\label{d12} \begin{cases} \displaystyle{\mathbf w}_{{\mathbf q}}=w_{1}({\mathbf q}){\mathbf r}_{1}({\mathbf q})+w_{2}({\mathbf q}){\mathbf r}_{2}({\mathbf q})+w_{3}({\mathbf q}){\mathbf r}_{3}({\mathbf q}),&{\mathbf q}\in\mathbb{N}^{d}_{R1},\\ \displaystyle{\mathbf w}_{{\mathbf q}}=w_{d}({\mathbf q}){\mathbf r}_{d}({\mathbf q})+w'_{d}({\mathbf q}){\mathbf r}'_{d}({\mathbf q})+w_{s}({\mathbf q}){\mathbf r}_{s}({\mathbf q}),&{\mathbf q}\in\mathbb{N}^{d}_{R2},\\ \displaystyle{\mathbf w}_{{\mathbf q}}=w({\mathbf q}){\mathbf r}({\mathbf q})+w'({\mathbf q}){\mathbf r}'({\mathbf q})+w''({\mathbf q}){\mathbf r}''({\mathbf q}),&{\mathbf q}\in\mathbb{N}^{d}_{R3},\\ \displaystyle{\mathbf w}_{{\mathbf q}}=w^{\mathrm{Re}}({\mathbf q})\mathrm{Re}{\mathbf r}_{c}({\mathbf q})+w^{\mathrm{Im}}({\mathbf q})\mathrm{Im}{\mathbf r}_{c}({\mathbf q})+w_{r}({\mathbf q}){\mathbf r}_{r}({\mathbf q}),&{\mathbf q}\in\mathbb{N}^{d}_{C}. \end{cases} \end{equation}
(13)
Thus, the unique solution \({\mathbf w}({\mathbf x}, t)\) to the linearized system (5) can be written in the following form.
\begin{eqnarray}\label{d13} {\mathbf w}({\mathbf x}, t)&=&\sum\limits_{{\mathbf q}\in\mathbb{N}^{d}_{R1}}\left[w_{1}({\mathbf q}){\mathbf r}_{1}({\mathbf q})e^{\lambda_{1}({\mathbf q})t}+w_{2}({\mathbf q}){\mathbf r}_{2}({\mathbf q})e^{\lambda_{2}({\mathbf q})t}+w_{3}({\mathbf q}){\mathbf r}_{3}({\mathbf q})e^{\lambda_{3}({\mathbf q})t}\right]e_{{\mathbf q}}({\mathbf x})\notag\\ && +\sum\limits_{{\mathbf q}\in\mathbb{N}^{d}_{R2}}\left\{\left[w_{d}({\mathbf q}){\mathbf r}_{d}({\mathbf q})+w'_{d}({\mathbf q})({\mathbf r}'_{d}({\mathbf q}) +{\mathbf r}_{d}({\mathbf q})t)\right]e^{\lambda_{d}({\mathbf q})t}+w_{s}({\mathbf q}){\mathbf r}_{s}({\mathbf q})e^{\lambda_{s}({\mathbf q})t}\right\}e_{{\mathbf q}}({\mathbf x})\notag\\ && +\sum\limits_{{\mathbf q}\in\mathbb{N}^{d}_{R3}}\left[w({\mathbf q}){\mathbf r}({\mathbf q})+w'({\mathbf q})\left({\mathbf r}'({\mathbf q})+{\mathbf r}({\mathbf q})t\right)+\ w''({\mathbf q})\left({\mathbf r}''({\mathbf q})+{\mathbf r}'({\mathbf q})t+{\mathbf r}({\mathbf q})t^{2}\right)\right]\notag\\ &&\times e^{\lambda({\mathbf q})t} e_{{\mathbf q}}({\mathbf x}) +\sum\limits_{{\mathbf q}\in\mathbb{N}^{d}_{C}}\left\{\left[w^{\mathrm{Re}}({\mathbf q})\left(\mathrm{Re}{\mathbf r}_{c}({\mathbf q})\cos[(\mathrm{Im}\lambda_{c}({\mathbf q}))t] -\mathrm{Im}{\mathbf r}_{c}({\mathbf q})\sin[(\mathrm{Im}\lambda_{c}({\mathbf q}))t]\right)\right.\right.\notag\\ && \left.\left.+w^{\mathrm{Im}}({\mathbf q})\left(\mathrm{Re}{\mathbf r}_{c}({\mathbf q})\sin[(\mathrm{Im}\lambda_{c}({\mathbf q}))t] +\mathrm{Im}{\mathbf r}_{c}({\mathbf q})\cos[(\mathrm{Im}\lambda_{c}({\mathbf q}))t]\right)\right]e^{(\mathrm{Re}\lambda_{c}({\mathbf q}))t}\right.\notag\\ && +\left.w_{r}({\mathbf q}){\mathbf r}_{r}({\mathbf q})e^{\lambda_{r}({\mathbf q})t}\right\}e_{{\mathbf q}}({\mathbf x})\notag\\ &:=&\sum\limits_{{\mathbf q}\in\mathbb{N}^{d}_{R1}}T_{R1}({\mathbf w_{q}})({\mathbf x},t)+\sum\limits_{{\mathbf q}\in\mathbb{N}^{d}_{R2}}T_{R2}({\mathbf w_{q}})({\mathbf x},t) +\sum\limits_{{\mathbf q}\in\mathbb{N}^{d}_{R3}}T_{R3}({\mathbf w_{q}})({\mathbf x},t) +\sum\limits_{{\mathbf q}\in\mathbb{N}^{d}_{C}}T_{c}({\mathbf w_{q}})({\mathbf x},t)\notag\\ &\equiv& e^{\mathfrak{L}t}{\mathbf w}_{0}({\mathbf x}). \end{eqnarray}
(14)
Recall that \[\lambda_{\max}= \max\limits_{{\mathbf q}\in \mathbb{N}^{d}}\max\limits_{1\leq i\leq3} \mathrm{Re}\lambda_{i}({\mathbf q})>0,\] where \(\lambda_{1}({\mathbf q})\), \(\lambda_{2}({\mathbf q})\), \(\lambda_{3}({\mathbf q})\) are the solutions of (7). Denote
\begin{equation}\label{d14} {\mathbb{N}^{d}}_{\max}=\{{\mathbf q}\in \mathbb{N}^{d} | \mathrm{Re}\lambda_{i}({\mathbf q})=\lambda_{\max}, i=1,2,3 \}. \end{equation}
(15)
By the assumption ( H\(_{3}\)), the largest eigenvalue \(\lambda_{\max}\) can be obtained, provided that \({\mathbf q}\) belongs to \(\mathbb{N}^{d}_{R1}\) or \(\mathbb{N}^{d}_{C}\).
In the sequel, we define \[I=\{i | 1 \leq i \leq 3\}, I_{1}=\{i | \lambda_{i}({\mathbf q})=\lambda_{\max}, 1 \leq i \leq 3\},\] and \[ \begin{array}{ll} \Lambda_{R1}=\mathbb{N}^{d}_{R1}\cap{\mathbb{N}^{d}}_{\max}, \Lambda_{C}=\mathbb{N}^{d}_{C}\cap{\mathbb{N}^{d}}_{\max},\\ \Lambda_{C1}=\{{\mathbf q}\in\Lambda_{C} | \mathrm{Re}\lambda_{c}({\mathbf q})=\lambda_{\max}\},\\ \Lambda_{C2}=\{{\mathbf q}\in\Lambda_{C} | \lambda_{r}({\mathbf q})=\lambda_{\max}\},\\ \Lambda_{C3}=\{{\mathbf q}\in\Lambda_{C} | \mathrm{Re}\lambda_{c}({\mathbf q})=\lambda_{\max}, \lambda_{r}({\mathbf q})=\lambda_{\max}\}. \end{array} \] Let \(e^{\mathfrak{M}t}{\mathbf w}_{0}({\mathbf x})\) be the dominant part of the solution \(e^{\mathfrak{L}t}{\mathbf w}_{0}({\mathbf x})\) of the linearized system (5), namely \begin{eqnarray*} e^{\mathfrak{M}t}{\mathbf w}_{0}({\mathbf x})&=&\sum\limits_{{\mathbf q}\in\Lambda_{R1}}\sum\limits_{i\in I_{1}}w_{i}({\mathbf q}){\mathbf r}_{i}({\mathbf q})e^{\lambda_{\max}t}e_{{\mathbf q}}({\mathbf x})\notag\\ && +\sum\limits_{{\mathbf q}\in\Lambda_{C1}}\left[w^{\mathrm{Re}}({\mathbf q})\left(\mathrm{Re}{\mathbf r}_{c}({\mathbf q})\cos[(\mathrm{Im}\lambda_{c}({\mathbf q}))t] -\mathrm{Im}{\mathbf r}_{c}({\mathbf q})\sin[(\mathrm{Im}\lambda_{c}({\mathbf q}))t]\right)\right.\notag\\ &&\left.+w^{\mathrm{Im}}({\mathbf q})\left(\mathrm{Re}{\mathbf r}_{c}({\mathbf q})\sin[(\mathrm{Im}\lambda_{c}({\mathbf q}))t] +\mathrm{Im}{\mathbf r}_{c}({\mathbf q})\cos[(\mathrm{Im}\lambda_{c}({\mathbf q}))t]\right)\right]e^{\lambda_{\max}t}e_{{\mathbf q}}({\mathbf x})\notag\\ && +\sum\limits_{{\mathbf q}\in\Lambda_{C2}}w_{r}({\mathbf q}){\mathbf r}_{r}({\mathbf q})e^{\lambda_{\max}t}e_{{\mathbf q}}({\mathbf x})\notag \end{eqnarray*}
\begin{eqnarray}\label{d15} &&+\sum\limits_{{\mathbf q}\in\Lambda_{C3}}\left\{\left[w^{\mathrm{Re}}({\mathbf q})\left(\mathrm{Re}{\mathbf r}_{c}({\mathbf q})\cos[(\mathrm{Im}\lambda_{c}({\mathbf q}))t] -\mathrm{Im}{\mathbf r}_{c}({\mathbf q})\sin[(\mathrm{Im}\lambda_{c}({\mathbf q}))t]\right)\right.\right.\notag\\ &&\left.\left.+w^{\mathrm{Im}}({\mathbf q})\left(\mathrm{Re}{\mathbf r}_{c}({\mathbf q})\sin[(\mathrm{Im}\lambda_{c}({\mathbf q}))t] +\mathrm{Im}{\mathbf r}_{c}({\mathbf q})\cos[(\mathrm{Im}\lambda_{c}({\mathbf q}))t]\right)\right]\right.\notag\\ &&+\left.w_{r}({\mathbf q}){\mathbf r}_{r}({\mathbf q})\right\}e^{\lambda_{\max}t}e_{{\mathbf q}}({\mathbf x}). \end{eqnarray}
(16)
Since \(\lambda_{1}({\mathbf q})\), \(\lambda_{2}({\mathbf q})\), \(\lambda_{3}({\mathbf q})\) are the roots of (7), let \(\beta_{i}({\mathbf q})=\frac{1}{q^{2}}\lambda_{i}({\mathbf q})\); then \(\beta_{1}({\mathbf q})\), \(\beta_{2}({\mathbf q})\), \(\beta_{3}({\mathbf q})\) are the three roots of \({\mathbf F}_{q}(\beta_{{\mathbf q}})=\det\left(\beta_{{\mathbf q}}\mathrm{I}-\frac{1}{q^{2}}{\mathbf L_{q}}\right)=0\) and \begin{eqnarray*} {\mathbf F}_{q}(\beta_{{\mathbf q}})&=&\det\left(% \begin{array}{ccc} \beta_{{\mathbf q}}+d_{1}+\frac{\mu}{q^2}& -\chi &\xi\\ -\frac{\alpha}{q^2} & \beta_{{\mathbf q}}+d_{2}+\frac{\beta}{q^2} & 0\\ -\frac{\gamma}{q^2} & 0&\beta_{{\mathbf q}}+d_{3}+\frac{\eta}{q^2} \end{array}% \right)\\ &=&\beta^{3}_{{\mathbf q}}+\bar{b}_{2}({\mathbf q})\beta^{2}_{{\mathbf q}}+\bar{b}_{1}({\mathbf q})\beta_{{\mathbf q}}+\bar{b}_{0}({\mathbf q}) \end{eqnarray*} with
\begin{equation}\label{d16} \begin{cases} \displaystyle\bar{b}_{2}({\mathbf q})=C_{21}+\frac{C_{22}}{q^{2}}=(d_{1}+d_{2}+d_{3})+\frac{\mu + \beta + \eta}{q^{2}},\\ \displaystyle\bar{b}_{1}({\mathbf q})=C_{11}+\frac{C_{12}}{q^{2}}+\frac{C_{13}}{q^{4}}=(d_{1}d_{2}+d_{1}d_{3}+d_{2}d_{3})+\frac{\mu{(d_{2}+d_{3})}+\beta(d_{1}+d_{3})+\eta(d_{1}+d_{2})-\alpha\chi-\gamma\xi}{q^{2}}+\frac{\mu\beta +\beta\eta +\mu\eta}{q^{4}},\\ \displaystyle\bar{b}_{0}({\mathbf q})=C_{01}+\frac{C_{02}}{q^{2}}+\frac{C_{03}}{q^{4}}+\frac{C_{04}}{q^{6}}=d_{1}d_{2}d_{3}+\frac{C_{02}}{q^{2}}+\frac{C_{03}}{q^{4}}+\frac{\mu\beta\eta}{q^{6}}, \end{cases} \end{equation}
(17)
Moreover,
\begin{equation}\label{d17} \begin{cases} \lim\limits_{q^2\rightarrow\infty}\bar{b}_{2}({\mathbf q})=d_{1}+d_{2}+d_{3}:=\bar{b}_{2},\\ \lim\limits_{q^2\rightarrow\infty}\bar{b}_{1}({\mathbf q})=d_{1}d_{2}+d_{1}d_{3}+d_{2}d_{3}:=\bar{b}_{1},\\ \lim\limits_{q^2\rightarrow\infty}\bar{b}_{0}({\mathbf q})=d_{1}d_{2}d_{3}:=\bar{b}_{0}. \end{cases} \end{equation}
(18)
One can define a function \({\mathbf F}^{\ast}(\beta_{{\mathbf q}})\) of the form \[{\mathbf F}^{\ast}(\beta_{{\mathbf q}}):=\beta^{3}_{{\mathbf q}}+\bar{b}_{2}\beta^{2}_{{\mathbf q}}+\bar{b}_{1}\beta_{{\mathbf q}}+\bar{b}_{0} =(\beta_{{\mathbf q}}+d_{1})(\beta_{{\mathbf q}}+d_{2})(\beta_{{\mathbf q}}+d_{3}).\] It is clear from the assumption (H\(_{2}\)) that the equation \({\mathbf F}^{\ast}(\beta_{{\mathbf q}})=0\) has three distinct negative roots \(-d_{1}\), \(-d_{2}\), \(-d_{3}\). For \(q^{2}\) sufficiently large, it follows from Lemma 3 that \(\mathrm{Re}\beta_{i}({\mathbf q})< 0\), \(\forall 1\leq i\leq 3\). Thus
\begin{equation}\label{d18} 0>\mathrm{Re}\beta_{i}({\mathbf q})>\sum^{3}_{j=1}\mathrm{Re}\beta_{j}({\mathbf q})=-\mathrm{Re}\bar{b}_{2}({\mathbf q}) \end{equation}
(19)
and
\begin{equation}\label{d19} \bar{b}_{1}({\mathbf q})=\beta_{1}({\mathbf q})\beta_{2}({\mathbf q})+\beta_{1}({\mathbf q})\beta_{3}({\mathbf q})+\beta_{2}({\mathbf q})\beta_{3}({\mathbf q}) \geq (\mathrm{Im}\beta_{i}({\mathbf q}))^{2}. \end{equation}
(20)
For \(q^{2}\) large enough, by (18) and (19), we have
\begin{equation}\label{d20} 0>\mathrm{Re}\beta_{i}({\mathbf q})>-\bar{b}_{2}-1>-\infty. \end{equation}
(21)
Again combining (18) and (20) yields for \(q^{2}\) sufficiently large
\begin{equation}\label{d21} |\mathrm{Im}\beta_{i}({\mathbf q})|< \sqrt{\bar{b}_{1}+1}< +\infty. \end{equation}
(22)
Applying (21) and (22), for every sequence \(\{{\mathbf q}_{n}\}\subset \mathbb{N}^{d}\) there exists a subsequence, still denoted by \(\{{\mathbf q}_{n}\}\), such that for \(1 \leq i\leq 3\) the limits \[\lim\limits_{n\rightarrow\infty}\mathrm{Re}\beta_{i}({\mathbf q}_{n}), \lim\limits_{n\rightarrow\infty}\mathrm{Im}\beta_{i}({\mathbf q}_{n})\] exist. Hence
\begin{equation}\label{d22} \lim\limits_{n\rightarrow\infty}\beta_{i}({\mathbf q}_{n})=\beta_{i}\in\mathbb{C}. \end{equation}
(23)
Notice by (18) and (23) that
\begin{equation}\label{d23} \begin{cases} -(\beta_{1}+\beta_{2}+\beta_{3})=\bar{b}_{2}=d_{1}+d_{2}+d_{3},\\ \beta_{1}\beta_{2}+\beta_{1}\beta_{3}+\beta_{2}\beta_{3}=\bar{b}_{1}=d_{1}d_{2}+d_{1}d_{3}+d_{2}d_{3},\\ -\beta_{1}\beta_{2}\beta_{3}=\bar{b}_{0}=d_{1}d_{2}d_{3}. \end{cases} \end{equation}
(24)
This means that \(\{\beta_{1},\beta_{2},\beta_{3}\}\) is a permutation of \(\{-d_{1},-d_{2},-d_{3}\}\). So for every sequence \(\{{\mathbf q}_{n}\}\in\mathbb{N}^{d}\), there exists a subsequence \(\{{\mathbf q}_{n_{j}}\}\) such that \[\lim\limits_{j\rightarrow\infty}\beta_{i}({\mathbf q}_{n_{j}})=\beta_{i}.\] Hence we can assume that \[ \lim\limits_{q^2\rightarrow\infty}\beta_{i}({\mathbf q})=-d_{i}, \forall 1 \leq i \leq3, \] or equivalently
\begin{equation}\label{d24} \lim\limits_{q^2\rightarrow\infty}\frac{1}{q^{2}}\lambda_{i}({\mathbf q})=-d_{i}, \forall 1 \leq i \leq3. \end{equation}
(25)
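The limit (25) can also be observed numerically. The sketch below (with the same hypothetical parameters and the matrix \({\mathbf L_{q}}\) used in the earlier sketches) computes \(\mathrm{Re}\lambda_{i}({\mathbf q})/q^{2}\) for increasing \(q^{2}\); the sorted values approach \(-d_{1}\), \(-d_{2}\), \(-d_{3}\).

```python
import numpy as np

# Same hypothetical parameters as in the earlier sketches.
d1, d2, d3 = 0.05, 1.0, 1.5
mu, alpha, beta, gamma, eta = 1.0, 8.0, 1.0, 0.5, 1.0
chi, xi = 2.0, 0.1

def L_q(q2):
    return np.array([[-d1*q2 - mu,  chi*q2,        -xi*q2],
                     [ alpha,      -d2*q2 - beta,   0.0  ],
                     [ gamma,       0.0,           -d3*q2 - eta]])

for q2 in [10, 100, 1000, 10000]:
    scaled = np.sort(np.linalg.eigvals(L_q(q2)).real / q2)[::-1]
    print(f"q^2 = {q2:6d}: Re(lambda_i)/q^2 ~ {np.round(scaled, 4)}  (limits: {-d1}, {-d2}, {-d3})")
```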
Using arguments similar to those of Lemma 4 in Hoang [1], the following lemma can be derived.

Lemma 4. If \({\mathbf q}\in \mathbb{N}^{d}\) and \(q^2\) is sufficiently large, then \(\lambda_{1}({\mathbf q})\), \(\lambda_{2}({\mathbf q})\), \(\lambda_{3}({\mathbf q})\) are real numbers and \(\lambda_{i}({\mathbf q})\neq\lambda_{j}({\mathbf q})\) for \(i\neq j\), \(i,j=1,2,3\).

Proof. It follows from the assumptions (H\(_{2}\)) and (25) that \(\mathrm{Re}\lambda_{i}({\mathbf q})\neq\mathrm{Re}\lambda_{j}({\mathbf q}), i\neq j.\) If there exists a sequence \(\{{\mathbf q}_{n}\}\subset\mathbb{N}^{d}\) such that the sequence \(\lambda_{i_{n}}({\mathbf q}_{n})\notin\mathbb{R}\), then we can choose a subsequence \(\{n_{m}\}\) of \(\{n\}\) and an integer \(j, 1 \leq j\leq 3\), such that \(i_{n_{m}}\equiv j\). Hence \[\lim\limits_{q^2_{n_{m}}\rightarrow\infty}\frac{1}{q^{2}_{n_{m}}}\lambda_{j}({\mathbf q}_{n_{m}})=-d_{j},\] and \[\lim\limits_{q^2_{n_{m}}\rightarrow\infty}\frac{1}{q^{2}_{n_{m}}}\overline{\lambda_{j}({\mathbf q}_{n_{m}})}=-d_{j},\] where \(\overline{\lambda_{j}({\mathbf q}_{n_{m}})}\) is the complex conjugate of \(\lambda_{j}({\mathbf q}_{n_{m}})\). Notice that \(\overline{\lambda_{1}({\mathbf q}_{n_{m}})}\in\left\{\lambda_{2}({\mathbf q}_{n_{m}}), \lambda_{3}({\mathbf q}_{n_{m}})\right\}\), \(\overline{\lambda_{2}({\mathbf q}_{n_{m}})}\in\left\{\lambda_{1}({\mathbf q}_{n_{m}}), \lambda_{3}({\mathbf q}_{n_{m}})\right\}\) and \(\overline{\lambda_{3}({\mathbf q}_{n_{m}})}\in\left\{\lambda_{1}({\mathbf q}_{n_{m}}), \lambda_{2}({\mathbf q}_{n_{m}})\right\}\); then there exists a subsequence of \(\{n_{m}\}\), still denoted by \(\{n_{m}\}\), and an index \(1 \leq l\leq 3\), \(l\neq j\), such that \(\overline{\lambda_{j}({\mathbf q}_{n_{m}})}=\lambda_{l}({\mathbf q}_{n_{m}})\). One can then obtain \[-d_{j}=\lim\limits_{q^2_{n_{m}}\rightarrow\infty}\frac{1}{q^{2}_{n_{m}}}\overline{\lambda_{j}({\mathbf q}_{n_{m}})} =\lim\limits_{q^2_{n_{m}}\rightarrow\infty}\frac{1}{q^{2}_{n_{m}}}\lambda_{l}({\mathbf q}_{n_{m}})=-d_{l}, \forall m\in \mathbb{N}.\] So \(d_{j}=d_{l}\) with \(j\neq l\), in contradiction to the assumption (H\(_{2}\)). Therefore, for \(q^{2}\) sufficiently large, \(\lambda_{1}({\mathbf q})\), \(\lambda_{2}({\mathbf q})\), \(\lambda_{3}({\mathbf q})\) are real numbers, and we deduce from \(\mathrm{Re}\lambda_{i}({\mathbf q})\neq\mathrm{Re}\lambda_{j}({\mathbf q})\) that \(\lambda_{i}({\mathbf q})\neq\lambda_{j}({\mathbf q})\) whenever \(i\neq j\), which completes the proof.

3. Growing modes and Bootstrap lemma

3.1. Growing modes in the model (1)

For convenience we will always denote by \(C_{k}\) \((k=1,2,\cdots)\) universal positive constants depending on \(d_{i}\), \(\chi\), \(\xi\), \(\mu\), \(\alpha\), \(\beta\), \( \gamma \), \(\eta\) \((i=1,2,3)\). The norm in \(L^{2}(\mathbb{T}^{d})\) is denoted by \(\|\cdot\|\).

Lemma 5. Suppose that (H\(_{1}\)) and (H\(_{3}\)) hold, and let \({\mathbf w}({\mathbf x},t)\equiv e^{\mathfrak{L}t}{\mathbf w}_{0}({\mathbf x})\) be a solution of the linearized system (5) with initial condition \({\mathbf w}_{0}({\mathbf x})\). Then there exists a constant \(\hat{C}_{1}>0\) depending on \(d_{i}\), \(\chi\), \(\xi\), \(\mu\), \(\alpha\), \(\beta\), \( \gamma\), \(\eta\) \((i=1,2,3)\) such that

\begin{equation}\label{u(t)} \|{\mathbf w}({\mathbf \cdot},t)\|\leq{\hat{C}_{1}e^{\lambda_{\max}t}\|{\mathbf w}({\mathbf \cdot},0)\|}, \forall t\geq0. \end{equation}
(26)

Proof. We will proceed in the following two cases.
Case 1: \(t\geq 0\) and \({\mathbf q}\in \mathbb{N}^{d}\) with \(q^{2}\) sufficiently large. By Lemma 4, for \(q^{2}\) sufficiently large, the matrix \({\mathbf L_{q}}\) has three distinct eigenvalues \(\lambda_{1}({\mathbf q})\), \(\lambda_{2}({\mathbf q})\), \(\lambda_{3}({\mathbf q})\) and corresponding linearly independent eigenvectors \({\mathbf r}_{1}({\mathbf q})\), \({\mathbf r}_{2}({\mathbf q})\), \({\mathbf r}_{3}({\mathbf q})\). We first look for an eigenvector \({\mathbf r}_{1}({\mathbf q})\) of the form \[{\mathbf r}_{1}({\mathbf q})=(1, r_{12}({\mathbf q}), r_{13}({\mathbf q}))^{\mathrm{T}},\] where \(r_{12}({\mathbf q})\), \(r_{13}({\mathbf q})\) solve the linear system \[\begin{array}{ll} (-d_{2}q^2-\beta -\lambda_{1}({\mathbf q}))r_{12}({\mathbf q})=-\alpha,\\ (-d_{3}q^2-\eta -\lambda_{1}({\mathbf q}))r_{13}({\mathbf q})=-\gamma, \end{array} \] so that \[ r_{12}({\mathbf q})=\frac{\alpha}{d_{2}q^2+\beta +\lambda_{1}({\mathbf q})}, \qquad r_{13}({\mathbf q})=\frac{\gamma}{d_{3}q^2+\eta +\lambda_{1}({\mathbf q})}, \] and \[\lim\limits_{q^2\rightarrow\infty}r_{12}({\mathbf q})=0, \qquad \lim\limits_{q^2\rightarrow\infty}r_{13}({\mathbf q})=0;\] hence

\begin{equation}\label{d26} \begin{array}{ll} \displaystyle\lim\limits_{q^2\rightarrow\infty}\mathbf r_{1}= (1,0,0)^{\mathrm{T}}. \end{array} \end{equation}
(27)
Let \({\mathbf r}_{2}({\mathbf q})=(r_{21}({\mathbf q}), 1, r_{23}({\mathbf q}))^{\mathrm{T}}\), \({\mathbf r}_{3}({\mathbf q})=(r_{31}({\mathbf q}), r_{32}({\mathbf q}), 1)^{\mathrm{T}}\) be eigenvectors corresponding to the eigenvalues \(\lambda_{2}({\mathbf q})\), \(\lambda_{3}({\mathbf q})\), respectively. Then \[\lim\limits_{q^2\rightarrow\infty}r_{21}({\mathbf q})q^{2}=\frac{\chi}{(d_{2}-d_{1})}, \lim\limits_{q^2\rightarrow\infty}r_{23}({\mathbf q})q^{2}=0,\] and \[\lim\limits_{q^2\rightarrow\infty}r_{31}({\mathbf q})q^{2}=\frac{-\xi}{(d_{3}-d_{1})}, \lim\limits_{q^2\rightarrow\infty}r_{32}({\mathbf q})q^{2}=0.\] Therefore
\begin{equation}\label{d27} \lim\limits_{q^2\rightarrow\infty}{\mathbf r}_{2}({\mathbf q})=\left(\frac{\chi}{(d_{2}-d_{3})}, 1, 0\right)^{\mathrm{T}}, \lim\limits_{q^2\rightarrow\infty}{\mathbf r}_{3}({\mathbf q})=\left(\frac{-\xi}{(d_{1}-d_{3})}, 0, 1\right)^{\mathrm{T}}. \end{equation}
(28)
By (27) and (28), we deduce that there exists a constant \(C_{1}>0\) such that
\begin{equation}\label{d28} |{\mathbf r}_{i}({\mathbf q})|\leq C_{1}, \forall {\mathbf q}\in\mathbb{N}^{d}, i=1,2,3. \end{equation}
(29)
For \(q^{2}\) sufficiently large, it follows from (13) that \({\mathbf w}_{{\mathbf q}}=\sum\limits^{3}_{i=1}w_{i}({\mathbf q}){\mathbf r}_{i}({\mathbf q}).\) Based on Cramer's rule and the Hadamard inequality, we have
\begin{equation}\label{d29} \begin{cases} \displaystyle|w_{1}({\mathbf q})| \leq\frac{|{\mathbf r}_{2}({\mathbf q})|\times|{\mathbf r}_{3}({\mathbf q})|\times|{\mathbf w}_{{\mathbf q}}|}{|\det[{\mathbf r}_{1}({\mathbf q}), {\mathbf r}_{2}({\mathbf q}), {\mathbf r}_{3}({\mathbf q})]|},\\ \displaystyle |w_{2}({\mathbf q})| \leq\frac{|{\mathbf r}_{1}({\mathbf q})|\times|{\mathbf r}_{3}({\mathbf q})|\times|{\mathbf w}_{{\mathbf q}}|}{|\det[{\mathbf r}_{1}({\mathbf q}), {\mathbf r}_{2}({\mathbf q}), {\mathbf r}_{3}({\mathbf q})]|},\\ \displaystyle |w_{3}({\mathbf q})| \leq\frac{|{\mathbf r}_{1}({\mathbf q})|\times|{\mathbf r}_{2}({\mathbf q})|\times|{\mathbf w}_{{\mathbf q}}|}{|\det[{\mathbf r}_{1}({\mathbf q}), {\mathbf r}_{2}({\mathbf q}), {\mathbf r}_{3}({\mathbf q})]|}. \end{cases} \end{equation}
(30)
In terms of (27) and (28), one can obtain
\begin{equation}\label{d30} \lim\limits_{q^2\rightarrow\infty}\det[{\mathbf r}_{1}({\mathbf q}), {\mathbf r}_{2}({\mathbf q}), {\mathbf r}_{3}({\mathbf q})]=1. \end{equation}
(31)
Applying (30) and (31) yields
\begin{equation}\label{d31} |w_{i}({\mathbf q})|\leq C_{2}|{\mathbf w}_{{\mathbf q}}|, \forall {\mathbf q}\in\mathbb{N}^{d}, i=1,2,3, \end{equation}
(32)
where \( C_{2}:=\max\left\{1, \sqrt{(\frac{\chi}{d_{2}-d_{1}})^{2}+1}, \sqrt {(\frac{\xi}{d_{2}-d_{3}})^{2}+1}\right\}>0\). Then, using (29), (32) and \( \lambda_{i}({\mathbf q})\leq \lambda_{\max}\), we see that for \(q^{2}\) sufficiently large there exists a constant \(C_{3}>0\), independent of \({\mathbf q}\), such that \[\left|w_{i}({\mathbf q}){\mathbf r}_{i}({\mathbf q})e^{\lambda_{i}({\mathbf q})t}\right|\leq C_{1}C_{2}e^{\lambda_{\max}t}|{\mathbf w}_{{\mathbf q}}|\leq C_{3}e^{\lambda_{\max}t}|{\mathbf w}_{{\mathbf q}}|,\] which leads to
\begin{equation}\label{d32} \left\|\sum\limits^{3}_{i=1}w_{i}({\mathbf q}){\mathbf r}_{i}({\mathbf q})e^{\lambda_{i}({\mathbf q})t}e_{{\mathbf q}}({\mathbf x})\right\|^{2}\leq 9C^{2}_{3}\left(\frac{\pi}{2}\right)^{d}e^{2\lambda_{\max}t}|{\mathbf w}_{{\mathbf q}}|^{2}. \end{equation}
(33)
Case 2: \(t\leq1\). It suffices to derive a standard \(L^{2}\) estimate. Using the Neumann boundary conditions, we multiply the first equation in (5) by \(u_{1}\), the second equation by \(k u_{2}\) and the third by \(u_{3}\), add them together, and integrate over \(\mathbb{T}^{d}\), which gives \[\begin{array}{ll} \displaystyle\frac{1}{2}\frac{d}{dt}\int_{\mathbb{T}^{d}}\{|u_{1}|^2+k|u_{2}|^2+|u_{3}|^2\}{\mathbf d x}+\int_{\mathbb{T}^{d}}\{d_{1}|\nabla{u_{1}}|^2+kd_{2}|\nabla{u_{2}}|^2+d_{3}|\nabla{u_{3}}|^2- \chi (\nabla{u_{1}}\nabla{u_{2}}) \displaystyle+\xi (\nabla{u_{1}}\nabla{u_{3}})\}{\mathbf d x}\\ \displaystyle=-\mu\int_{\mathbb{T}^{d}}u_{1}^2{\mathbf d x}-k\beta\int_{\mathbb{T}^{d}}u_{2}^2{\mathbf d x}- \eta\int_{\mathbb{T}^{d}}u_{3}^2{\mathbf d x}+\alpha k\int_{\mathbb{T}^{d}}u_{1}u_{2}{\mathbf d x}+\gamma \int_{\mathbb{T}^{d}}u_{1}u_{3}{\mathbf d x}, \end{array} \] where \(k=\frac{\chi^{2}d_{3}}{d_{1}d_{2}d_{3}+d_{2}\xi^{2}}\). The integrand of the second integral can be estimated as follows
\begin{eqnarray} && d_{1}|\nabla{u_{1}}|^{2}+kd_{2}|\nabla{u_{2}}|^{2}+d_{3}|\nabla{u_{3}}|^{2}-\chi (\nabla{u_{1}}\nabla{u_{2}})+\xi (\nabla{u_{1}}\nabla{u_{3}})\notag \\ && \geq\frac{d_{1}}{2}|\nabla{u_{1}}|^{2}+\frac{kd_{2}}{2}|\nabla{u_{2}}|^{2}+\frac{3d_{3}}{2}|\nabla{u_{3}}|^{2}\geq 0. \end{eqnarray}
(34)
Using Young's inequality, we deduce that
\begin{eqnarray} && -\mu\int_{\mathbb{T}^{d}}u_{1}^2{\mathbf{dx}}-k\beta\int_{\mathbb{T}^{d}}u_{2}^2{\mathbf{dx}}- \eta\int_{\mathbb{T}^{d}}u_{3}^2{\mathbf{dx}}+\alpha k\int_{\mathbb{T}^{d}}u_{1}u_{2}{\mathbf{dx}}+\gamma \int_{\mathbb{T}^{d}}u_{1}u_{3}{\mathbf{dx}}\notag\\ && \leq (-\mu+\frac{k\alpha^{2}}{2\beta}+\frac{\gamma^2}{2\eta})\int_{\mathbb{T}^{d}}|u_{1}|^{2}{\mathbf{dx}}-\frac{k\beta}{2}\int_{\mathbb{T}^{d}}|u_{2}|^{2}{\mathbf{dx}}-\frac{\eta}{2}\int_{\mathbb{T}^{d}}|u_{3}|^{2}{\mathbf{dx}}\notag\\ && \leq \max(-\mu+\frac{k\alpha^{2}}{2\beta}+\frac{\gamma^2}{2\eta},-\frac{\beta}{2},\frac{\eta}{2})\int_{\mathbb{T}^{d}}(|u_{1}|^2+ k|u_{2}|^2+|u_{3}|^2){\mathbf d x}. \end{eqnarray}
(35)
Then \[\begin{array}{ll} \displaystyle\frac{1}{2}\frac{d}{dt}\int_{\mathbb{T}^{d}}\{|u_{1}|^2+k|u_{2}|^2+|u_{3}|^2\}{\mathbf d x}\leq \displaystyle\max(-\mu+\frac{k\alpha^{2}}{2\beta}+\frac{\gamma^2}{2\eta},-\frac{\beta}{2}, \frac{\eta}{2})\int_{\mathbb{T}^{d}}(|u_{1}|^2+ k|u_{2}|^2+|u_{3}|^2){\mathbf d x}. \end{array}\] By Grönwall's inequality, we obtain \(\|{\mathbf w}({\mathbf \cdot},t)\|\leq{\hat{C}_{1}e^{\lambda_{\max}t}\|{\mathbf w}({\mathbf \cdot},0)\|},\) where \(\hat{C}_{1}= \max(-\mu+\frac{k\alpha^{2}}{2\beta}+\frac{\gamma^2}{2\eta},-\frac{\beta}{2},\frac{\eta}{2})\). This completes the proof.
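To illustrate the growth bound (26) numerically (again with the hypothetical parameters and the matrix \({\mathbf L_{q}}\) from the earlier sketches, and with each mode evolved exactly via the matrix exponential), the sketch below compares \(|{\mathbf w}_{\mathbf q}(t)|\) with \(e^{\lambda_{\max}t}|{\mathbf w}_{\mathbf q}(0)|\) for a few modes; the ratio stays bounded, consistent with a moderate constant \(\hat C_{1}\).

```python
import numpy as np
from scipy.linalg import expm

# Hypothetical parameters as in the earlier sketches.
d1, d2, d3 = 0.05, 1.0, 1.5
mu, alpha, beta, gamma, eta = 1.0, 8.0, 1.0, 0.5, 1.0
chi, xi = 2.0, 0.1

def L_q(q2):
    return np.array([[-d1*q2 - mu,  chi*q2,        -xi*q2],
                     [ alpha,      -d2*q2 - beta,   0.0  ],
                     [ gamma,       0.0,           -d3*q2 - eta]])

lam_max = max(np.linalg.eigvals(L_q(q2)).real.max() for q2 in range(1, 400))

rng = np.random.default_rng(0)
for q2 in [1, 5, 20, 100]:
    w0 = rng.standard_normal(3)                  # random initial mode coefficient
    for t in [0.5, 1.0, 2.0]:
        wt = expm(t * L_q(q2)) @ w0              # exact solution of w_q' = L_q w_q
        ratio = np.linalg.norm(wt) / (np.exp(lam_max * t) * np.linalg.norm(w0))
        print(f"q^2 = {q2:3d}, t = {t}: |w_q(t)| / (e^(lam_max t)|w_q(0)|) = {ratio:.3f}")
```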

3.2. Bootstrap lemma and \(H^{2}\)-estimate in the model (1)

Denote \[\partial_{ x_{i} x_{j}}u=\frac{\partial^{2}u}{\partial x_{i}\partial x_{j}}, \partial_{ x_{i}}u=\frac{\partial u}{\partial x_{i}}, D^\alpha u=\frac{\partial^{|\alpha|}u}{\partial x^{\alpha_1}_{1}\cdots\partial x^{\alpha_d}_{d}},\] where \(\alpha=(\alpha_1,\cdots,\alpha_d)\), \(|\alpha|=\sum^{d}\limits_{i=1}\alpha_i\), \(i,j=1,\cdots,d\). Let us introduce
\begin{equation}\label{f3} \begin{array}{ll} \displaystyle k=\frac{\chi^{2}d_{3}}{d_{1}d_{2}d_{3}+d_{2}\xi^{2}} \end{array} \end{equation}
(36)
By the standard theory of parabolic equations, we can establish the existence of local solutions of the model (4).

Lemma 6. (Local existence). For \(s\geq1\) \((d=1)\) and \(s\geq2\) \((d=2,3)\), there exists a \(T_{0}>0\) such that the problem (4) with \(u_{1}(\cdot,0), u_{2}(\cdot,0), u_{3}(\cdot,0)\in{H^{s}(\mathbb{T}^{d})}\) has a unique solution \({\mathbf w}(\cdot,t)\) on \((0,T_{0})\) which satisfies \[ \|{\mathbf w}(t)\|_{H^{s}(\mathbb{T}^{d})}\leq{C}{\|{\mathbf w}(0)\|_{H^{s}(\mathbb{T}^{d})}}, \] where \(C\) is a positive constant depending on \(d_{i}, \xi, \chi, \alpha, \beta, \gamma, \eta\) \((i=1,2,3)\).

Lemma 7. Let \({\mathbf w}({\mathbf x},t)=(u_{1}({\mathbf x},t),u_{2}({\mathbf x},t),u_{3}({\mathbf x},t))^{\mathrm{T}}\) be a solution of the nonlinear perturbation system (4). Then \begin{eqnarray*} &&\frac{1}{2}\frac{d}{dt}\sum_{|\alpha|=2}\int_{\mathbb{T}^{d}}\left\{|D^{\alpha}u_{1}|^2+k|D^{\alpha}u_{2}|^2 +|D^{\alpha}u_{3}|^2\right\}d{\mathbf x}\\ &&+\sum_{|\alpha|=2}\int_{\mathbb{T}^{d}}\left\{\frac{d_{1}}{4}|\nabla (D^{\alpha}u_{1})|^2+\frac{kd_{2}}{2}|\nabla (D^{\alpha}u_{2})|^2 +\frac{3d_{3}}{2}|\nabla( D^{\alpha}u_{3})|^2\right\}d{\mathbf x}\\ &&+\frac{\beta{k}}{2}\sum_{|\alpha|=2}\int_{\mathbb{T}^{d}}|D^{\alpha}u_{2}|^2d{\mathbf x}+\frac{\eta}{2}\sum_{|\alpha|=2}\int_{\mathbb{T}^{d}}|D^{\alpha}u_{3}|^2d{\mathbf x}\\ &&\leq\hat{C}_2\|{\mathbf w}\|_{H^2(\mathbb{T}^{d})}\|\nabla^3{\mathbf w}\|^2+\hat{C}_{3}\|u_{1}\|^2, \end{eqnarray*} where \(\hat{C}_{2}\) and \(C_{0}\) are generic constants and \(\hat{C}_{3}=\left(\frac{\alpha^{2}k\eta +\beta\gamma^{2}}{8 \beta\eta \varepsilon^{2}}\right)C_{0}\).

Proof. Let \({\mathbf w}({\mathbf x},t)\) be a solution of (4). It is not hard to verify that if \({\tilde{\mathbf w}}({\mathbf x},t)=(\tilde{u}_{1}({\mathbf x},t),\tilde{u}_{2}({\mathbf x},t),\tilde{u}_{3}({\mathbf x},t))^{\mathrm{T}}\) is the even extension of \({\mathbf w}({\mathbf x},t)\) to \(2\mathbb{T}^{d}=(-\pi,\pi)^{d} (d=1,2,3)\), then \({\tilde{\mathbf w}}({\mathbf x},t)\) is also a solution of (4), with homogeneous Neumann boundary conditions and periodic boundary conditions on \(2\mathbb{T}^{d}\). Therefore,

\begin{eqnarray} && \frac{1}{2}\frac{d}{dt}\int_{2\mathbb{T}^{d}}\left[|\partial_{x_{i}x_{j}}\tilde{u}_{1}|^2+k|\partial_{x_{i}x_{j}}\tilde{u}_{2}|^2 +|\partial_{x_{i}x_{j}}\tilde{u}_{3}|^2\right]d{\mathbf x} +\int_{2\mathbb{T}^{d}}\left[d_{1}|\nabla(\partial_{x_{i}x_{j}}\tilde{u}_{1})|^{2}+kd_{2}|\nabla(\partial_{x_{i}x_{j}}\tilde{u}_{2})|^{2} +d_{3}|\nabla(\partial_{x_{i}x_{j}}\tilde{u}_{3})|^{2}\right.\notag\\ && \left.-\chi\nabla(\partial_{x_{i}x_{j}}\tilde{u}_{1})\cdot\nabla(\partial_{x_{i}x_{j}}\tilde{u}_{2}) +\xi\nabla(\partial_{x_{i}x_{j}}\tilde{u}_{1})\cdot\nabla(\partial_{x_{i}x_{j}}\tilde{u}_{3})\right]d{\mathbf x}\notag\\ && +\mu\int_{2\mathbb{T}^{d}}|\partial_{x_{i}x_{j}}\tilde{u}_{1}|^{2}d{\mathbf x} +k\beta\int_{2\mathbb{T}^{d}}|\partial_{x_{i}x_{j}}\tilde{u}_{2}|^{2}d{\mathbf x} +\eta\int_{2\mathbb{T}^{d}}|\partial_{x_{i}x_{j}}\tilde{u}_{3}|^{2}d{\mathbf x}\notag\\ &&=\int_{2\mathbb{T}^{d}}\left[\chi\nabla(\partial_{x_{i}x_{j}}\tilde{u}_{1})\cdot\partial_{x_{i}x_{j}}(\tilde{u}_{1}\nabla \tilde{u}_{2}) -\xi\nabla(\partial_{x_{i}x_{j}}\tilde{u}_{1})\cdot\partial_{x_{i}x_{j}}(\tilde{u}_{1}\nabla \tilde{u}_{3})\right]d{\mathbf x}\notag\\ && +\alpha k\int_{2\mathbb{T}^{d}}\partial_{x_{i}x_{j}}\tilde{u}_{1}\cdot\partial_{x_{i}x_{j}}\tilde{u}_{2}d{\mathbf x} +\gamma\int_{2\mathbb{T}^{d}}\partial_{x_{i}x_{j}}\tilde{u}_{1}\cdot\partial_{x_{i}x_{j}}\tilde{u}_{3}d{\mathbf x} -2\mu\int_{2\mathbb{T}^{d}}\left[u_{1}|\partial_{x_{i}x_{j}}\tilde{u}_{1}|^{2} +|\partial_{x_{i}}\tilde{u}_{1}||\partial_{x_{j}}\tilde{u}_{1}||\partial_{x_{i}x_{j}}\tilde{u_{1}}|\right]d{\mathbf x}\notag\\ &&:=J_{1}+J_{2}+J_{3}+J_{4}. \end{eqnarray}
(37)
Using Young's inequality, we get
\begin{eqnarray} &&\left[d_{1}|\nabla(\partial_{x_{i}x_{j}}\tilde{u}_{1})|^{2}+kd_{2}|\nabla(\partial_{x_{i}x_{j}}\tilde{u}_{2})|^{2} +d_{3}|\nabla(\partial_{x_{i}x_{j}}\tilde{u}_{3})|^{2}\right. \left.-\chi\nabla(\partial_{x_{i}x_{j}}\tilde{u}_{1})\cdot\nabla(\partial_{x_{i}x_{j}}\tilde{u}_{2}) +\xi\nabla(\partial_{x_{i}x_{j}}\tilde{u}_{1})\cdot\nabla(\partial_{x_{i}x_{j}}\tilde{u}_{3})\right]\notag\\ &&\geq \frac{d_{1}}{2}|\nabla(\partial_{x_{i}x_{j}}\tilde{u}_{1})|^{2}+\frac{kd_{2}}{2}|\nabla(\partial_{x_{i}x_{j}}\tilde{u}_{2})|^{2} +\frac{3d_{3}}{2}|\nabla(\partial_{x_{i}x_{j}}\tilde{u}_{3})|^{2}. \end{eqnarray}
(38)
The nonlinear term \(J_1\) is bounded by
\begin{eqnarray} J_{1}&\leq& \chi\int_{2\mathbb{T}^{d}}|\nabla (\partial_{x_{i}x_{j}}\tilde{u}_{1})||\partial_{x_{i}x_{j}}\tilde{u}_{1}\cdot\nabla \tilde{u}_{2}|d{\mathbf x} +\chi\int_{2\mathbb{T}^{d}}|\nabla (\partial_{x_{i}x_{j}}\tilde{u}_{1})||\partial_{x_{j}}\tilde{u}_{1}\cdot\nabla (\partial_{x_{i}}\tilde{u}_{2})|d{\mathbf x}\notag\\ && +\chi\int_{2\mathbb{T}^{d}}|\nabla (\partial_{x_{i}x_{j}}\tilde{u}_{1})||\partial_{x_{i}}\tilde{u}_{1}\cdot\nabla (\partial_{x_{j}}\tilde{u}_{2})|d{\mathbf x} +\chi\int_{2\mathbb{T}^{d}}|\nabla (\partial_{x_{i}x_{j}}\tilde{u}_{1})||\tilde{u}_{1}\nabla(\partial_{x_{i}x_{j}}\tilde{u}_{2})|d{\mathbf x}\notag\\ && -\xi\int_{2\mathbb{T}^{d}}|\nabla (\partial_{x_{i}x_{j}}\tilde{u}_{1})||\partial_{x_{i}x_{j}}\tilde{u}_{1}\cdot\nabla \tilde{u}_{3}|d{\mathbf x} -\xi\int_{2\mathbb{T}^{d}}|\nabla (\partial_{x_{i}x_{j}}\tilde{u}_{1})||\partial_{x_{j}}\tilde{u}_{1}\cdot\nabla (\partial_{x_{i}}\tilde{u}_{3})|d{\mathbf x}\notag\\ && -\xi\int_{2\mathbb{T}^{d}}|\nabla (\partial_{x_{i}x_{j}}\tilde{u}_{1})||\partial_{x_{i}}\tilde{u}_{1}\cdot\nabla (\partial_{x_{j}}\tilde{u}_{3})|d{\mathbf x} -\xi\int_{2\mathbb{T}^{d}}|\nabla (\partial_{x_{i}x_{j}}\tilde{u}_{1})||\tilde{u}_{1}\nabla(\partial_{x_{i}x_{j}}\tilde{u}_{3})|d{\mathbf x}\notag\\ & \leq&\chi\|\nabla \tilde{u}_{2}\|_{L^{\infty}(2\mathbb{T}^{d})}\|\nabla (\partial_{x_{i}x_{j}}\tilde{u}_{1})\|\cdot\|\partial_{x_{i}x_{j}}\tilde{u}_{1}\| -\xi\|\nabla \tilde{u}_{3}\|_{L^{\infty}(2\mathbb{T}^{d})}\|\nabla (\partial_{x_{i}x_{j}}\tilde{u}_{1})\|\cdot\|\partial_{x_{i}x_{j}}\tilde{u}_{1}\|\notag\\ && +\chi\|\tilde{u}_{1}\|_{L^{\infty}(2\mathbb{T}^{d})}\|\nabla (\partial_{x_{i}x_{j}}\tilde{u}_{1})\|\|\nabla (\partial_{x_{i}x_{j}}\tilde{u}_{2})\| -\xi\|\tilde{u}_{1}\|_{L^{\infty}(2\mathbb{T}^{d})}\|\nabla (\partial_{x_{i}x_{j}}\tilde{u}_{1})\|\|\nabla (\partial_{x_{i}x_{j}}\tilde{u}_{3})\|\notag\\ && +2\chi\sum^{d}\limits_{i=1}\|\nabla \tilde{u}_{1}\|_{L^{\infty}(2\mathbb{T}^{d})}\|\partial_{x_{i}x_{j}}\tilde{u}_{2}\|\|\nabla (\partial_{x_{i}x_{j}}\tilde{u}_{1})\| \displaystyle-2\xi\sum^{d}\limits_{i=1}\|\nabla \tilde{u}_{1}\|_{L^{\infty}(2\mathbb{T}^{d})}\|\partial_{x_{i}x_{j}}\tilde{u}_{3}\|\|\nabla (\partial_{x_{i}x_{j}}\tilde{u}_{1})\|. \end{eqnarray}
(39)
Recalling the Sobolev embedding \(H^{2}(\mathbb{T}^{d})\hookrightarrow L^{\infty}(\mathbb{T}^{d})\) for \(d\leq3\), we have
\begin{equation}\label{d42} \|g\|_{L^{\infty}(2\mathbb{T}^{d})}\leq{C_{4}\|g\|_{H^2(2\mathbb{T}^{d})}}, \end{equation}
(40)
\begin{equation}\label{d43} \|g\|_{L^{4}(2\mathbb{T}^{d})}\leq{C_{5}\|g\|_{H^2(2\mathbb{T}^{d})}}, \end{equation}
(41)
\begin{equation}\label{d44} \|g\|_{L^{6}(2\mathbb{T}^{d})}\leq{C_{6}\|g\|_{H^2(2\mathbb{T}^{d})}}. \end{equation}
(42)
Notice that
\begin{equation}\label{d45} \begin{cases} \displaystyle\int_{2\mathbb{T}^{d}}\nabla{\tilde{u}_{1}}d{\mathbf x}=\int_{2\mathbb{T}^{d}}\nabla{\tilde{u}_{2}}d{\mathbf x}=\int_{2\mathbb{T}^{d}}\nabla{\tilde{u}_{3}}d{\mathbf x}=0,\\ \displaystyle\int_{2\mathbb{T}^{d}}\partial_{x_{i}x_{j}}\tilde{u}_{1}d{\mathbf x}=\int_{2\mathbb{T}^{d}}\partial_{x_{i}x_{j}}\tilde{u}_{2}d{\mathbf x} =\int_{2\mathbb{T}^{d}}\partial_{x_{i}x_{j}}\tilde{u}_{3}d{\mathbf x}=0. \end{cases} \end{equation}
(43)
Moreover, if \(g\in H^{1}(2\mathbb{T}^{d})\) with \(\int_{2\mathbb{T}^{d}} g=0\), then
\begin{equation}\label{d46} \|g\|\leq(2\pi)^{\frac{d}{4}}\|g\|_{L^{4}(2\mathbb{T}^{d})}\leq C_{7}\|g\|_{H^{1}(2\mathbb{T}^{d})}\leq C_{8}\|\nabla g\|, d\leq3. \end{equation}
(44)
It follows from (43) and (44) that \[\|\partial_{x_{i}}g\|\leq C_{9}\|\nabla(\partial_{x_{i}}g)\|, \|\partial_{x_{i}x_{j}}g\|\leq C_{9}\|\nabla(\partial_{x_{i}x_{j}}g)\|\] and
\begin{equation}\label{d47} \|\nabla{g}\|\leq C_{9}\left(\sum^{d}_{i,j=1,2}\|\partial_{x_{i}x_{j}}g\|^2\right)^{\frac{1}{2}} \leq C^2_{9}\left(\sum_{|\alpha|=2}\|\nabla(D^{\alpha}g))\|^2\right)^{\frac{1}{2}}. \end{equation}
(45)
Together with (40) and (45), we further get
\begin{equation}\label{d48} \|\nabla{g}\|_{L^{\infty}(2\mathbb{T}^{d})}\leq C_{10}\|\nabla{g}\|_{H^{2}(2\mathbb{T}^{d})}\leq C_{11}\|\nabla^{3}{g}\|_{L^{2}(2\mathbb{T}^{d})}. \end{equation}
(46)
Then, as a consequence of (40) and (45), one can obtain
\begin{equation}\label{d49} \sum\limits_{|\alpha|=2}J_{1}\leq(\chi-\xi){C_{12}}\|\tilde{\mathbf{w}}\|_{H^{2}(2\mathbb{T}^{d})}\|\nabla^3\tilde{\textbf{w}}\|^2, \end{equation}
(47)
where \(C_{12}:=C_{4}+(1+2d)C_{9}\).
Applying interpolation, we can deduce that for all \(\varepsilon>0\),
\begin{equation}\label{d50} \|\partial_{x_{i}x_{j}}\tilde{u}\|^2 \leq C_{0} \left(\varepsilon\|\nabla(\partial_{x_{i}x_{j}}\tilde{u})\|^2+\frac{\|\tilde{u}\|^2}{4\varepsilon^2}\right). \end{equation}
(48)
Choosing \(\varepsilon>0\) in (48) such that \(\left(\frac{\alpha^{2}k\eta+\beta\gamma^{2}}{2\beta\eta}\right)C_{0}\varepsilon=d_{1}/4\), we obtain
\begin{eqnarray} J_2+J_3\displaystyle&\leq&\alpha k\int_{2\mathbb{T}^{d}}\partial_{x_{i}x_{j}}\tilde{u}_{1}\cdot\partial_{x_{i}x_{j}}\tilde{u}_{2}d{\mathbf x} +\gamma\int_{2\mathbb{T}^{d}}\partial_{x_{i}x_{j}}\tilde{u}_{1}\cdot\partial_{x_{i}x_{j}}\tilde{u}_{3}d{\mathbf x}\notag\\ &\leq& \frac{\alpha^{2}k\eta+\beta\gamma^{2}}{2\beta\eta}\int_{2\mathbb{T}^{d}}|\partial_{x_{i}x_{j}}\tilde{u}_{1}|^{2}d{\mathbf x}+\frac{\beta k}{2}\int_{2\mathbb{T}^{d}}|\partial_{x_{i}x_{j}}\tilde{u}_{2}|^{2}d{\mathbf x}+\frac{\eta }{2}\int_{2\mathbb{T}^{d}}|\partial_{x_{i}x_{j}}\tilde{u}_{3}|^{2}d{\mathbf x}\notag\\ &\leq&\frac{d_{1}}{4}\|\nabla(\partial_{x_{i}x_{j}}\tilde{u}_{1})\|^{2} +\frac{\beta k}{2}\int_{2\mathbb{T}^{d}}|\partial_{x_{i}x_{j}}\tilde{u}_{2}|^{2}d{\mathbf x}+\frac{\eta }{2}\int_{2\mathbb{T}^{d}}|\partial_{x_{i}x_{j}}\tilde{u}_{3}|^{2}d{\mathbf x} +\left (\frac{\alpha^{2}k\eta+\gamma^{2}\beta}{8\beta\eta\varepsilon^{2}}\right)C_{0}\|\tilde{u}_{1}\|^{2}. \end{eqnarray}
(49)
Then, as a consequence of (40), (41), (42) and (45), one can obtain
\begin{equation}\label{d52} \sum\limits_{|\alpha|=2}J_{4}\leq{4\mu C_{10}}\| \widetilde{\textbf{w}}\|_{H^{2}(2\mathbb{T}^{d})}\|\nabla^3\widetilde{\textbf{w}}\|^2. \end{equation}
(50)
Substituting (47), (49)-(50) into (37), we have \begin{eqnarray*} &&\frac{1}{2}\frac{d}{dt}\sum_{|\alpha|=2}\int_{\mathbb{T}^{d}}\left\{|D^{\alpha}u_{1}|^2+k|D^{\alpha}u_{2}|^2 +|D^{\alpha}u_{3}|^2\right\}d{\mathbf x}\\ &&+\sum_{|\alpha|=2}\int_{\mathbb{T}^{d}}\left\{\frac{d_{1}}{4}|\nabla (D^{\alpha}u_{1})|^2+\frac{kd_{2}}{2}|\nabla (D^{\alpha}u_{2})|^2 +\frac{3d_{3}}{2}|\nabla( D^{\alpha}u_{3})|^2\right\}d{\mathbf x}\\ &&+\frac{\beta{k}}{2}\sum_{|\alpha|=2}\int_{\mathbb{T}^{d}}|D^{\alpha}u_{2}|^2d{\mathbf x}+\frac{\eta}{2}\sum_{|\alpha|=2}\int_{\mathbb{T}^{d}}|D^{\alpha}u_{3}|^2d{\mathbf x}\\ &&\leq\hat{C}_2\|{\mathbf w}\|_{H^2(\mathbb{T}^{d})}\|\nabla^3{\mathbf w}\|^2+\hat{C}_{3}\|u_{1}\|^2, \end{eqnarray*} where \(\hat{C}_{2}\) and \(C_{0}\) are generic constants and \(\hat{C}_{3}=\left(\frac{\alpha^{2}k\eta +\beta\gamma^{2}}{8 \beta\eta \varepsilon^{2}}\right)C_{0}\). This completes the proof of Lemma 7.

Lemma 8. Let \({\textbf w}({\textbf x},t)\) be a solution to the system (4) such that for \(0\leq t\leq T\),

\begin{equation}\label{d53} \|{\mathbf w}({\mathbf \cdot},t)\|_{H^2(\mathbb{T}^{d})} \leq\frac{1}{\hat{C}_2}\min{\left\{\frac{d_{1}}{4}, \frac{kd_{2}}{2}, \frac{3d_{3}}{2}\right\}} \end{equation}
(51)
and
\begin{equation}\label{d54} \|{\mathbf w}({\mathbf \cdot},t)\|\leq{2\hat{C}_1}e^{\lambda_{\max}t}\|{\mathbf w}({\mathbf \cdot},0)\|. \end{equation}
(52)
Then for \(0\leq{t}\leq{T}\),
\begin{equation}\label{d55} \|{\mathbf w}({\mathbf \cdot},t)\|^2_{H^2(\mathbb{T}^{d})} \leq{\hat{C}_4}\left\{\|{\mathbf w}({\mathbf \cdot},0)\|^2_{H^2(\mathbb{T}^{d})}+e^{2\lambda_{\max}t}\|{\mathbf w}({\mathbf \cdot},0)\|^2\right\}, \end{equation}
(53)
where \(\hat{C}_4=\max\{(1+C^2_{9})k, 4\hat{C}^2_1[1+ \hat{C}_3(1+C^2_{9})/(2\lambda_{\max})]\}\geq1\), if \(k\geq1\). \(\hat{C}_4=\max\{(1+C^2_{9})/k, 4\hat{C}^2_1[1+\hat{C}_3(1+C^2_{9})/(2\lambda_{\max}k)]\}\geq 1\), if \(k<1\).

Proof. It follows from (45) that

\begin{equation}\label{d56} \|\nabla{\mathbf w}({\mathbf \cdot},t)\|^2\leq C^2_{9}\sum_{|\alpha|=2}\|D^{\alpha}{\mathbf w}({\mathbf \cdot},t)\|^2. \end{equation}
(54)
So
\begin{equation}\label{d57} \|{\mathbf w}({\mathbf \cdot},t)\|^2_{H^2(\mathbb{T}^{d})}\leq\|{\mathbf w}({\mathbf \cdot},t)\|^2 +(1+C^2_{9})\sum\limits_{|\alpha|=2}\|D^{\alpha}{\mathbf w}({\mathbf \cdot},t)\|^2. \end{equation}
(55)
By Lemma 7 and (51), we infer
\begin{equation}\label{d58} \begin{array}{ll} \displaystyle\frac{d}{dt}\sum\limits_{|\alpha|=2} \int_{\mathbb{T}^{d}}\left\{|D^{\alpha}u_{1}|^2+k|D^{\alpha}u_{2}|^2 +|D^{\alpha}u_{3}|^2\right\}d{\mathbf x} \displaystyle\leq \hat{C}_{3}\|u_{1}\|^2\leq \hat{C}_{3}\|{\mathbf w}({\mathbf \cdot},t)\|^2. \end{array} \end{equation}
(56)
Integrating (56) and using (52), we conclude
\begin{eqnarray}\label{d59} &&\displaystyle\frac{1}{2}\sum\limits_{|\alpha|=2}\int_{\mathbb{T}^{d}}\left\{|D^{\alpha}u_{1}({\mathbf \cdot},t)|^2+k|D^{\alpha}u_{2}({\mathbf \cdot},t)|^2 +|D^{\alpha}u_{3}({\mathbf \cdot},t)|^2\right\}d{\mathbf x}\notag\\ && \displaystyle\leq\sum\limits_{|\alpha|=2}\int_{\mathbb{T}^{d}}\left\{|D^{\alpha}u_{1}({\mathbf \cdot},0)|^2+k|D^{\alpha}u_{2}({\mathbf \cdot},0)|^2 +|D^{\alpha}u_{3}({\mathbf \cdot},0)|^2\right\}d{\mathbf x} \displaystyle+\frac{4\hat{C}^{2}_{1}\hat{C}_{3}}{\lambda_{\max}}e^{2\lambda_{\max}t}\|{\mathbf w}({\mathbf \cdot},0)\|^{2}. \end{eqnarray}
(57)
We first consider the case \(k\geq1\). By (57), we have \[\sum\limits_{|\alpha|=2}\|D^{\alpha}{\mathbf w}({\mathbf \cdot},t)\|^2\leq k\sum\limits_{|\alpha|=2}\|D^{\alpha}{\mathbf w}({\mathbf \cdot},0)\|^2 +\frac{4\hat{C}^2_1\hat{C}_3}{\lambda_{\max}}e^{2\lambda_{\max}t}\|{\mathbf w}({\mathbf \cdot},0)\|^2.\] We see from this estimate and (55) that
\begin{equation}\label{d60} \|{\mathbf w}({\mathbf \cdot},t)\|^2_{H^2(\mathbb{T}^{d})}\leq\hat{C}_4\left\{\|{\mathbf w}({\mathbf \cdot},0)\|^2_{H^2(\mathbb{T}^{d})}+e^{2\lambda_{\max}t}\|{\mathbf w}({\mathbf \cdot},0)\|^2\right\}, \end{equation}
(58)
where \(\hat{C}_4:=\max\left\{(1+C^2_{9})k, 4\hat{C}^2_1\left[1+\frac{\hat{C}_3(1+C^2_{9})}{\lambda_{\max}}\right]\right\}\). On the other hand, for \(k< 1\), we deduce from (57) that \[ \sum_{|\alpha|=2}\|D^{\alpha}{\mathbf w}({\mathbf \cdot},t)\|^2\leq\frac{1}{k}\left(\sum_{|\alpha|=2}\|D^{\alpha}{\mathbf w}({\mathbf \cdot},0)\|^2 +\frac{4\hat{C}^2_1\hat{C}_3}{\lambda_{\max}}e^{2\lambda_{\max}t}\|{\mathbf w}({\mathbf \cdot},0)\|^2\right). \] This estimate, combined with (52) and (55), gives
\begin{equation}\label{d61} \|{\mathbf w}({\mathbf \cdot},t)\|^2_{H^2(\mathbb{T}^{d})}\leq\hat{C}_4\left\{\|{\mathbf w}({\mathbf \cdot},0)\|^2_{H^2(\mathbb{T}^{d})}+e^{2\lambda_{\max}t}\|{\mathbf w}({\mathbf \cdot},0)\|^2\right\}, \end{equation}
(59)
where \(\hat{C}_4:=\max\left\{\frac{1+C^2_{9}}{k}, 4\hat{C}^2_1\left[1+\frac{\hat{C}_3(1+C^2_{9})}{\lambda_{\max}k}\right]\right\}\). This completes the proof of Lemma 8.

4. Main result

Let \(\theta\) be a small fixed constant. For arbitrarily small \(\delta>0\), we define the escape time \(T^{\delta}\) by
\begin{equation}\label{d62} \theta=\delta e^{\lambda_{\max}T^{\delta}}, \end{equation}
(60)
where \(\lambda_{\max}\) is the dominant eigenvalue which is the maximal growth rate (see (10)). Obviously,
\begin{equation}\label{d63} T^{\delta}=\frac{1}{\lambda_{\max}}\ln\frac{\theta}{\delta}. \end{equation}
(61)
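As a purely illustrative sketch (the numerical values of \(\lambda_{\max}\), \(\theta\) and \(\delta\) below are assumptions, not quantities computed from the model), Equation (61) can be evaluated in R to see how slowly the escape time grows as the initial perturbation size \(\delta\) shrinks.

```r
# Hypothetical values, for illustration only: lambda_max, theta and delta
# are assumptions, not parameters derived from the chemotaxis model.
lambda_max <- 2.0                       # assumed maximal growth rate
theta      <- 0.1                       # assumed small fixed constant
delta      <- c(1e-2, 1e-4, 1e-6)       # assumed initial perturbation sizes

# Escape time from Equation (61): T_delta = ln(theta / delta) / lambda_max
T_delta <- log(theta / delta) / lambda_max
print(data.frame(delta = delta, T_delta = T_delta))
# T_delta grows only logarithmically as delta shrinks.
```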
Our main result in this paper is as follows:

Theorem 2. Suppose that (H\(_{1}\)), (H\(_{2}\)) and (H\(_{3}\)) are satisfied. Let \({\textbf w}_{0}({\textbf x})\in{H^2(\mathbb{T}^{d})}\) with \(\|{\mathbf w}_0({\mathbf x})\|=1\). Then there exist constants \(\delta_0>0, \hat{C}>0\) and \(\theta>0\), depending on \(d_{i} (i=1,2,3), \chi, \xi, \mu, \alpha, \beta, \eta, \gamma\), such that for all \(0< \delta\leq\delta_0\), if the initial perturbation of the steady state \({\mathbf w_{c}}\) is \({\mathbf w}^{\delta}({\mathbf \cdot},0)=\delta{\mathbf w}_0\), then its nonlinear evolution \({\mathbf w}^{\delta}({\mathbf \cdot},t)\) satisfies

\begin{equation}\label{d64} \|{\mathbf w}^{\delta}({\mathbf \cdot},t)-\delta e^{\mathfrak{M}t}{\mathbf w}_{0}({\mathbf x})\| \leq{\hat{C}}\left\{{e}^{-\rho t}+\delta\|{\mathbf w}_0\|^2_{H^{2}(\mathbb{T}^{d})}+\delta{e}^{\lambda_{\max}t}\right\}\delta{e}^{\lambda_{\max}t} \end{equation}
(62)
for \(0\leq{t}\leq{T^{\delta}}\), where \(\rho>0\) is the gap between the largest growth rate \(\lambda_{\max}\) and the rest of \(\mathrm{Re}\lambda_{i}({\mathbf q})\) in (7), and \(e^{\mathfrak{M}t}{\mathbf w}_{0}({\mathbf x})\), defined in (16), is the dominant part of the solution of the linearized system (5).

Proof. Let \({\mathbf w}^{\delta}({\mathbf x},t)\) be the solution of (4) with initial data \({\mathbf w}^{\delta}({\mathbf \cdot},0)=\delta{\mathbf w}_0\). Define

\begin{equation}\label{d65} T^{\ast}=\sup \left\{t \bigg| \left\|{\mathbf w}^{\delta}({\mathbf \cdot},t)- \delta e^{\mathfrak{L}t}{\mathbf w}_0\right\|\leq{\frac{\hat{C}_1}{2}}\delta e^{\lambda_{\max}t}\right\}, \end{equation}
(63)
\begin{equation}\label{d66} T^{\ast\ast}=\sup\left\{t \bigg| \left\|{\mathbf w}^{\delta}({\mathbf \cdot},t)\right\|_{H^2(\mathbb{T}^{d})}\leq{\frac{1}{\hat{C}_2}}\min\left\{\frac{d_{1}}{4}, \frac{kd_{2}}{2}, \frac{3d_{3}}{2}\right\}\right\}. \end{equation}
(64)
From the definition of \(T^{\ast}\) and Lemma 5, for all \(0\leq t\leq{T^{\ast}}\), we obtain
\begin{equation}\label{d67} \left\|{\mathbf w}^{\delta}(\cdot,t)\right\|\leq\frac{3}{2}\hat{C}_1\delta e^{\lambda_{\max}t}. \end{equation}
(65)
Furthermore, by Lemma 8 and the bootstrap argument, we have
\begin{equation}\label{d68} \left\|{\mathbf w}^{\delta}(\cdot,t)\right\|_{H^2(\mathbb{T}^{d})}\leq\sqrt{\hat{C}_4}\left\{\delta\|{\mathbf w}_0\|_{H^2(\mathbb{T}^{d})}+\delta e^{\lambda_{\max}t}\right\}. \end{equation}
(66)
Applying Duhamel's principle, the solution of (4) can be written as
\begin{equation}\label{d69} \begin{array}{ll} {\mathbf w}^{\delta}(\cdot,t)\displaystyle=\delta e^{\mathfrak{L}t}{\mathbf w}_0 -\int^{t}_{0}{e}^{\mathfrak{L}(t-\tau)} \left[\chi\nabla(u^{\delta}_{1}(\tau)\nabla{u^{\delta}_{2}(\tau)}) \right. \displaystyle +\left.\xi\nabla(u^{\delta}_{1}(\tau)\nabla{u^{\delta}_{3}(\tau)}) +\mu u^{\delta}_{1}(\tau)(1+u^{\delta}_{1}(\tau)), 0, 0\right]d\tau. \end{array} \end{equation}
(67)
It follows from Lemma 5, (40), (44) and Lemma 8 that for \(0\leq{t}\leq\min{\{T^{\delta},T^{\ast},T^{\ast\ast}\}}\),
\begin{equation}\label{d70} \left\|{\mathbf w}^{\delta}({\mathbf \cdot},t)-\delta{e}^{\mathfrak{L}t}{\mathbf w}_0\right\| \leq \hat{C}_1\hat{C}_{5}\int^{t}_0e^{\lambda_{\max}(t-\tau)}\|{\mathbf w}^{\delta}(\tau)\|^{2}_{H^{2}(\mathbb{T}^{d})}d\tau, \end{equation}
(68)
where \(\hat{C}_{5}=\max C^2_{9}\{\chi+\chi\frac{C_{4}}{C^2_{9}}+\xi+\xi\frac{C_{4}}{C^2_{9}}+\mu C_{1}\}\). By (66) and (68), we see that for \(t\leq\min{\{T^{\delta},T^{\ast},T^{\ast\ast}\}}\),
\begin{equation}\label{d71} \left\|{\mathbf w}^{\delta}(\cdot,t)-\delta{e}^{\mathfrak{L}t}{\mathbf w}_0\right\| \leq \hat{C}_1\hat{C}_4\hat{C}_5\left\{\frac{\delta\|{\mathbf w}_{0}\|^2_{H^2}}{\lambda_{\max}}+\frac{\delta e^{\lambda_{\max}t}}{\lambda_{\max}}\right\}\delta e^{\lambda_{\max}t}. \end{equation}
(69)
We now prove that if \(\delta_{0}\) and \(\theta\) are chosen such that
\begin{equation}\label{d72} \begin{array}{ll} \displaystyle\theta< \frac{1}{\hat{C}_{2}\hat{C}_{4}}\min\left\{\frac{\lambda_{\max}}{4}, \frac{d_{1}}{8}, \frac{kd_{2}}{4}, \frac{3d_{3}}{4}\right\}, \end{array} \end{equation}
(70)
and
\begin{equation}\label{d73} \sqrt{\hat{C}_4}\delta_{0}\|{\mathbf w}_0\|_{H^2(\mathbb{T}^{d})}\leq\frac{1}{2\hat{C}_{2}}\min\left\{\frac{d_{1}}{4},\frac{kd_{2}}{2}, \frac{3d_{3}}{2}\right\}, \end{equation}
(71)
as well as
\begin{equation}\label{d74} \hat{C}_4\hat{C}_5\frac{\delta_{0} \|{\mathbf w}_0\|^{2}_{H^2(\mathbb{T}^{d})}}{\lambda_{\max}}< \frac{1}{4}, \end{equation}
(72)
then \(T^{\delta}=\min{\{T^{\delta},T^{\ast},T^{\ast\ast}\}}\) for \(\delta\leq\delta_{0}\).
If \(T^{\ast\ast}\) is the smallest, we can let \(t=T^{\ast\ast}\leq{T^{\delta}}\) in (66). By (70) and (71) we have \[\begin{array}{ll} \displaystyle\left\|{\mathbf w}^{\delta}(T^{\ast\ast})\right\|_{H^2(\mathbb{T}^{d})} \leq\sqrt{\hat{C}_4}\left\|{\mathbf w}^{\delta}_0\right\|_{H^2(\mathbb{T}^{d})}+\sqrt{\hat{C}_4} \theta \displaystyle< \frac{1}{\hat{C}_{2}}\min\left\{\frac{d_{1}}{4}, \frac{kd_{2}}{2}, \frac{3d_{3}}{2}\right\}, \end{array}\] for \(\delta\) sufficiently small and \(\hat{C}_4\geq1\), in contradiction to the definition of \(T^{\ast\ast}\). On the other hand, if \(T^{\ast}\) is the minimum, we can let \(t=T^{\ast}\) in (69), so that \[\begin{array}{ll} \displaystyle\left\|{\mathbf w}^{\delta}({\mathbf \cdot},T^{\ast})-\delta{e}^{\mathfrak{L}T^{\ast}}{\mathbf w}_0\right\| \leq \hat{C}_1\hat{C}_4\hat{C}_5\left\{\frac{\delta\|{\mathbf w}_0\|^{2}_{H^2(\mathbb{T}^{d})}}{\lambda_{\max}} +\frac{\theta}{\lambda_{\max}}\right\}\delta e^{\lambda_{\max}T^{\ast}} \displaystyle< \frac{\hat{C}_1}{2}\delta e^{\lambda_{\max}T^{\ast}}, \end{array} \] for sufficiently small \(\delta_{0}\) as in (72) and \(\hat{C}_5/{\hat{C}}_{2}\leq1\). This again contradicts the definition of \(T^{\ast}\). Therefore, the desired assertion follows. Finally, we prove the inequality (62). Notice by (14) that
\begin{eqnarray}\label{d75} &&\left\|{\mathbf w}^{\delta}({\mathbf \cdot},t)-\delta{e}^{\mathfrak{M}t}{\mathbf w}_0\right\|\leq\big\|{\mathbf w}^{\delta}({\mathbf \cdot},t)-\delta{e}^{\mathfrak{L}t}{\mathbf w}_0\big\| +\bigg\|\delta\sum\limits_{{\mathbf q}\in\Lambda_{R1}}\sum\limits_{i\in I\setminus I_{1}}w_{i}({\mathbf q}){\mathbf r}_{i}({\mathbf q})e^{\lambda_{i}t}e_{{\mathbf q}}({\mathbf x})\bigg\|\nonumber\\ &&+\bigg\|\delta\sum\limits_{{\mathbf q}\in\mathbb{N}^{d}_{R1}\setminus\Lambda_{R1}}\sum\limits_{i\in I}w_{i}({\mathbf q}){\mathbf r}_{i}({\mathbf q})e^{\lambda_{i}t}e_{{\mathbf q}}({\mathbf x})\bigg\|\nonumber\\ &&+\bigg\|\delta\sum\limits_{{\mathbf q}\in\mathbb{N}^{d}_{R2}}\left\{\left[w_{d}({\mathbf q}){\mathbf r}_{d}({\mathbf q})+w'_{d}({\mathbf q})({\mathbf r}'_{d}({\mathbf q})+{\mathbf r}_{d}({\mathbf q})t)\right]e^{\lambda_{d}({\mathbf q})t}\right.\left.+w_{s}({\mathbf q}){\mathbf r}_{s}({\mathbf q})e^{\lambda_{s}({\mathbf q})t}\right\}e_{{\mathbf q}}({\mathbf x})\bigg\|\nonumber\\ &&+\bigg\|\delta\sum\limits_{{\mathbf q}\in\mathbb{N}^{d}_{R3}}\left[w({\mathbf q}){\mathbf r}({\mathbf q})+w'({\mathbf q})\left({\mathbf r}'({\mathbf q})+{\mathbf r}({\mathbf q})t\right)\right.+w''({\mathbf q})\left.\left({\mathbf r}''({\mathbf q})+{\mathbf r}'({\mathbf q})t+{\mathbf r}({\mathbf q})t^{2}\right)\right]e^{\lambda({\mathbf q})t} e_{{\mathbf q}}({\mathbf x})\bigg\|\nonumber\\ &&+\bigg\|\delta\sum\limits_{{\mathbf q}\in\Lambda_{C1}}w_{r}({\mathbf q}){\mathbf r}_{r}({\mathbf q})e^{\lambda_{r}({\mathbf q})t}e_{{\mathbf q}}({\mathbf x})\bigg\|+\bigg\|\delta\sum\limits_{{\mathbf q}\in\Lambda_{C2}}\left[w^{\mathrm{Re}}({\mathbf q})\left(\mathrm{Re}{\mathbf r}_{c}({\mathbf q})\cos[(\mathrm{Im}\lambda_{c}({\mathbf q}))t] -\mathrm{Im}{\mathbf r}_{c}({\mathbf q})\sin[(\mathrm{Im}\lambda_{c}({\mathbf q}))t]\right)\right.\nonumber\\ &&\left.+w^{\mathrm{Im}}({\mathbf q})\left(\mathrm{Re}{\mathbf r}_{c}({\mathbf q})\sin[(\mathrm{Im}\lambda_{c}({\mathbf q}))t] +\mathrm{Im}{\mathbf r}_{c}({\mathbf q})\cos[(\mathrm{Im}\lambda_{c}({\mathbf q}))t]\right)\right]e^{(\mathrm{Re}\lambda_{c}({\mathbf q}))t}e_{{\mathbf q}}({\mathbf x})\bigg\|\nonumber\\ &&+\bigg\|\delta\sum\limits_{{\mathbf q}\in\mathbb{N}^{d}_{C}\setminus\Lambda_{C3}}\left\{\left[w^{\mathrm{Re}}({\mathbf q})\left(\mathrm{Re}{\mathbf r}_{c}({\mathbf q})\cos[(\mathrm{Im}\lambda_{c}({\mathbf q}))t] -\mathrm{Im}{\mathbf r}_{c}({\mathbf q})\sin[(\mathrm{Im}\lambda_{c}({\mathbf q}))t]\right)\right.\right.\nonumber\\ &&\left.\left.+w^{\mathrm{Im}}({\mathbf q})\left(\mathrm{Re}{\mathbf r}_{c}({\mathbf q})\sin[(\mathrm{Im}\lambda_{c}({\mathbf q}))t] +\mathrm{Im}{\mathbf r}_{c}({\mathbf q})\cos[(\mathrm{Im}\lambda_{c}({\mathbf q}))t]\right)\right]e^{(\mathrm{Re}\lambda_{c}({\mathbf q}))t}\right.+\left.w_{r}({\mathbf q}){\mathbf r}_{r}({\mathbf q})e^{\lambda_{r}({\mathbf q})t}\right\}e_{{\mathbf q}}({\mathbf x})\bigg\|\nonumber\\ &&:=\left\|{\mathbf w}^{\delta}({\mathbf \cdot},t)-\delta{e}^{\mathfrak{L}t}{\mathbf w}_0\right\|+J_{6}+J_{7}+J_{8}+J_{9}+J_{10}+J_{11}+J_{12}. \end{eqnarray}
(73)
We next estimate each term \(J_{i} (i=6,7,\cdots,12)\) on the right-hand side of (73). It is not difficult to see that there are finitely many values \({\mathbf q}\in \mathbb{N}^{d}\) satisfying \(\mathrm{Re}\lambda_{i}({\mathbf q})=\lambda_{\max}\), and that \(|{\mathbf q}|\) is bounded for each \({\mathbf q}\in{\mathbb{N}^{d}}_{\max}\). For each \({\mathbf q}\in \mathbb{N}^{d}\) with \(q^{2} < N\), there exists a constant \(C_{*}>0\) such that
\begin{equation}\label{d76} \begin{cases} |{\mathbf r}_{1}({\mathbf q})|, |{\mathbf r}_{2}({\mathbf q})|, |{\mathbf r}_{3}({\mathbf q})|\leq C_{*},&{\mathbf q}\in\mathbb{N}^{d}_{R1},\\ |{\mathbf r}_{d}({\mathbf q})|, |{\mathbf r}'({\mathbf q})|, |{\mathbf r}_{s}({\mathbf q})|\leq C_{*},&{\mathbf q}\in\mathbb{N}^{d}_{R2},\\ |{\mathbf r}({\mathbf q})|, |{\mathbf r}'({\mathbf q})|, |{\mathbf r}''({\mathbf q})|\leq C_{*},&{\mathbf q}\in\mathbb{N}^{d}_{R3},\\ |\mathrm{Re}{\mathbf r}_{c}({\mathbf q})|, |\mathrm{Im}{\mathbf r}_{c}({\mathbf q})|, |{\mathbf r}_{r}({\mathbf q})|\leq C_{*},&{\mathbf q}\in\mathbb{N}^{d}_{C}. \end{cases} \end{equation}
(74)
By a method similar to the one used to prove (32), using (74) and (13), there exists a constant \(C_{**}>0\) such that
\begin{equation}\label{d77} \begin{cases} |w_{1}({\mathbf q})|, |w_{2}({\mathbf q})|, |w_{3}({\mathbf q})|\leq C_{**}|{\mathbf w}_{{\mathbf q}}|,&{\mathbf q}\in\mathbb{N}^{d}_{R1},\\ |w_{d}({\mathbf q})|, |w'_{d}({\mathbf q})|, |w_{s}({\mathbf q})|\leq C_{**}|{\mathbf w}_{{\mathbf q}}|,&{\mathbf q}\in\mathbb{N}^{d}_{R2},\\ |w({\mathbf q})|, |w'({\mathbf q})|, |w''({\mathbf q})|\leq C_{**}|{\mathbf w}_{{\mathbf q}}|,&{\mathbf q}\in\mathbb{N}^{d}_{R3},\\ |w^{\mathrm{Re}}({\mathbf q})|, |w^{\mathrm{Im}}({\mathbf q})|, |w_{r}({\mathbf q})|\leq C_{**}|{\mathbf w}_{{\mathbf q}}|,&{\mathbf q}\in\mathbb{N}^{d}_{C} \end{cases} \end{equation}
(75)
and
\begin{equation}\label{d78} te^{\lambda_{d}({\mathbf q})t}\leq C_{**} \ \text{for}\ {\mathbf q}\in\mathbb{N}^{d}_{R2}, \qquad te^{\lambda({\mathbf q})t},\ t^{2}e^{\lambda({\mathbf q})t}\leq C_{**} \ \text{for}\ {\mathbf q}\in\mathbb{N}^{d}_{R3}. \end{equation}
(76)
By (29), (32), (74), (75) and \(\|{\mathbf w}_{0}\|=1\), there exists a constant \(\hat{C}_{6}>0\) such that \[\displaystyle J^{2}_6\leq\delta^{2} \hat{C}^{2}_6 e^{2(\lambda_{\max}-\rho)t}\left(\frac{\pi}{2}\right)^{d}\sum\limits_{{\mathbf q}\in\Lambda_{R1}}|{\mathbf w}_{{\mathbf q}}|^{2}\leq\delta^{2} \hat{C}^{2}_6 e^{2(\lambda_{\max}-\rho)t}\|{\mathbf w}_{0}\|^{2}\leq\delta^{2} \hat{C}^{2}_6 e^{2(\lambda_{\max}-\rho)t},\] that is,
\begin{equation}\label{d79} J_6\leq\delta\hat{C}_6 e^{(\lambda_{\max}-\rho)t}. \end{equation}
(77)
Moreover,
\begin{equation}\label{d80} J_7\leq\delta e^{(\lambda_{\max}-\rho)t}. \end{equation}
(78)
Similarly, there exists a constant \(\hat{C}_{7}>0\) such that
\begin{equation}\label{d81} J_i\leq\delta\hat{C}_7 e^{(\lambda_{\max}-\rho)t}, i=8,\cdots,12. \end{equation}
(79)
Substituting (69), (77)-(79) into (73) yields \begin{eqnarray*} \displaystyle\left\|{\mathbf w}^{\delta}({\mathbf \cdot},t)-\delta{e}^{\mathfrak{M}t}{\mathbf w}_0\right\| &\leq&\hat{C}_1\hat{C}_4\hat{C}_5\left\{\frac{\delta\|{\mathbf w}_{0}\|^2_{H^2}}{\lambda_{\max}}+\frac{\delta e^{\lambda_{\max}t}}{\lambda_{\max}}\right\}\delta e^{\lambda_{\max}t} +\hat{C}_6\delta e^{(\lambda_{\max}-\rho)t}+\delta e^{(\lambda_{\max}-\rho)t}+5\hat{C}_7 \delta e^{(\lambda_{\max}-\rho)t}\\ &\leq&\left\{(1+\hat{C}_{6}+5\hat{C}_7)e^{-\rho t}+\frac{\hat{C}_1\hat{C}_4\hat{C}_5}{\lambda_{\max}}\left(\delta\|{\mathbf w}_0\|^2_{H^2(\mathbb{T}^{d})} +\delta e^{\lambda_{\max}t}\right)\right\}\delta e^{\lambda_{\max}t}\\ &\leq&{\hat{C}}\left\{e^{-\rho t}+\delta\|{\mathbf w}_0\|^2_{H^2(\mathbb{T}^{d})}+\delta e^{\lambda_{\max}t}\right\}\delta e^{\lambda_{\max}t}, \forall 0\leq{t}\leq{T^{\delta}}, \end{eqnarray*} where \(\hat{C}:=\max\{1+\hat{C}_{6}+5\hat{C}_7, \frac{\hat{C}_1\hat{C}_4\hat{C}_5}{\lambda_{\max}}\}\), which completes the proof.

Corollary 1. (Nonlinear instability). Assume that the conditions (\({\mathbf H_{1}}\)), (\({\mathbf H_{2}}\)) and (\({\mathbf H_{3}}\)) hold. Then the positive constant equilibrium point \({\mathbf w_{c}}\) of the problem (1) is nonlinearly unstable in the sense of the \(L^{2}\)-norm.

Proof. Notice that if there exists \({\mathbf q}_0=(q_{01},\ldots,q_{0d})\in{\mathbb{N}^{d}}_{\max}\), then \({\mathbf L}_{{\mathbf q_{0}}}\) has an eigenvalue with \(\mathrm{Re}\lambda_{{\mathbf q_{0}}} = \lambda_{\max}\); denote the corresponding eigenvector by \({\mathbf r}_{{\mathbf q_{0}}}\). Take \[{\mathbf w}_0({\mathbf x})=\kappa\frac{{\mathbf r}({\mathbf q}_0)}{|{\mathbf r}({\mathbf q}_0)|}e_{{\mathbf q}_0}({\mathbf x})\] with \(\kappa=1/\|e_{{\mathbf q}_0}\|=\sqrt{(2/\pi)^{d}}\) so that \(\|{\mathbf w}_{0}(x)\|=1\). In addition, at \(t=T^{\delta}\) and for \(\delta\) sufficiently small, we require

\begin{equation} \begin{cases} \displaystyle\delta\|{\mathbf w}_0({\mathbf x})\|^{2}_{H^2(\mathbb{T}^{d})}\leq \frac{1}{4 \hat{C}},\\ \displaystyle e^{-\rho T^{\delta}}=\left(\frac{\delta}{\theta}\right)^{\frac{\rho}{\lambda_{\max}}}< \frac{1}{8\hat{C}},\\ \displaystyle\theta=\delta e^{\lambda_{\max}T^{\delta}}< \frac{1}{8\hat{C}}. \end{cases} \end{equation}
(80)
It follows from Theorem 2 and (80) that
\begin{equation} \begin{array}{ll} \displaystyle\|\delta{e}^{\mathfrak{M}T^{\delta}}{\mathbf w}_0\|-\|{\mathbf w}^{\delta}({\mathbf \cdot},T^{\delta})\| \leq\|{\mathbf w}^{\delta}({\mathbf \cdot},T^{\delta})-\delta{e}^{\mathfrak{M}T^{\delta}}{\mathbf w}_0\| \displaystyle\leq{\hat{C}}\left\{e^{-\rho T^{\delta}}+\delta\|{\mathbf w}_0\|^2_{H^2(\mathbb{T}^{d})}+\theta\right\}\theta \displaystyle< \frac{1}{2}\theta. \end{array} \end{equation}
(81)
Notice that the dominant part of the solution of the linearized system (5) satisfies
\begin{equation} \|\delta e^{\mathfrak{M}T^{\delta}}{\mathbf w}_0\|=\|\delta e^{\lambda_{\max}T^{\delta}}{\mathbf w}_0\|=\delta e^{\lambda_{\max}T^{\delta}}=\theta. \end{equation}
(82)
By (81) and (82), we deduce that \[\|{\mathbf w}^{\delta}({\mathbf \cdot},T^{\delta})\|>\frac{1}{2}\theta>0.\]

Author Contributions

All authors contributed equally to the writing of this paper. All authors read and approved the final manuscript.

Competing Interests

The author(s) do not have any competing interests in the manuscript.


Open Journal of Mathematical Analysis

Modeling the movement of particles in tilings by Markov chains

Zirhumanana Balike\(^1\), Arne Ring, Meseyeki Saiguran
Department of Mathematics and Physics, Institut Supérieur Pédagogique de Bukavu, Democratic Republic of the Congo.; (Z.B)
Department of Mathematics, University of the Free State, South Africa.; (A.R)
Department of Mathematical Sciences, St. Johns University of Tanzania, Tanzania.; (M.S)
\(^1\)Corresponding Author: dieudonne.z.balike@aims-senegal.org

Abstract

This paper studies the movement of a molecule in two types of cell complexes: the square tiling and the hexagonal one. This movement from a cell \(i\) to a cell \(j\) is referred to as a homogeneous Markov chain. States with the same stochastic behavior are grouped together using symmetries of states deduced from groups acting on the cellular complexes. This technique of lumpability is effective in forming new chains from the old ones without losing the primitive properties, and it simplifies tedious calculations. Numerical simulations are performed using the R software to determine the impact of the shape of the tiling and other parameters on the achievement of the equilibrium. We start from the small square tiling and then the small hexagonal tiling before comparing the results obtained for each of them. In this paper, only continuous Markov chains are considered. In each tiling, the molecule is supposed to leave the central cell and move into the surrounding cells.

Keywords:

Markov Chains, hexagonal tiling, square tiling, symmetries.

1. Introduction

Living organisms consist of one or more tiny components of several types and shapes termed cells, on which molecules move in continuous random motion. These cells and the molecules can be considered as subdivisions of a 2-dimensional plane on which particles randomly move. The plane can be much wider, but considering that each molecule moving on the plane has a starting cell, we can restrict this movement to a few groups of cells. The knowledge obtained from this small group of cells can be extended to improve our understanding of the movement of molecules on a larger scale. The shape of the cells dictates the different random possibilities of a molecule's movement to neighboring cells from the starting cell.

A cell can assume different shapes, including square and hexagonal shapes. In both square and hexagonal tilings assumed by a cell, the set of all possibilities of a molecule moving towards a neighboring cell can be seen as a Markov chain \(\{X_t , t > 0\}\) [1]. This Markov chain is driven by a parameter \(p\) which represents the probability for the molecule under study to move from one cell to a neighboring cell. A Markov chain can be discrete or continuous depending on whether the time considered is discrete or continuous [2].

A recent study on this topic [1] considered a discrete-time process. It was demonstrated that the molecule is faster in the hexagonal tiling than in the square tiling.

In this paper, we will look at the continuous process and compare the results with those found in the discrete process. We will examine how the probability impacts the movement of a molecule from cell \(i\) to cell \(j\). When a molecule moves from a cell \(i\) to \(j\), the possible next step of the movement depends on the number of cells enclosing it. For example, from the central cell of the square tiling, a molecule has four cells it can move to, while there are six possibilities in the hexagonal tiling.

In the aforementioned study ([1]), two starting positions were considered: the central cell and the surrounding ones. We only consider the central cell to be the starting position of the molecule since each cell (even the border cells) can be considered as central by enlarging the plane.

Infinitesimal generators in continuous time will replace the transition matrices in discrete time to describe the movement of the molecule. In this study, the space is discrete.

Sometimes, the transition matrices can be very large and almost impossible to handle for computations. In order to reduce the calculations, we will use state symmetries after identifying the non-equivalent cells in each tiling, and then we will lump states with the same properties [3]. Symmetry groups afford a precise definition of structural equivalence for Markov chain states, aggregating them into a partition of the original Markov process into small subsets that conserve all the previous properties [4]. This aggregation results in a new Markov chain (the aggregated chain) with a smaller number of states, such that the limiting probabilities of the aggregated states are obtained by summing the limiting probabilities of the corresponding states of the initial Markov chain [5].

The specific questions we want to address include:
  • (1) What is the effect of the discrete or continuous nature of time on the oscillatory movement of the molecule?
  • (2) What is the effect of the probability, the time and the shape of the tiling on the attainment of equilibrium in the continuous Markov process under consideration?

2. Markov chains

2.1. Definitions

Definition 1. A sequence of random variables \(\{X_n\}_{n_{\geq 0}}\) in a countable space E is called a stochastic process. E is called the state space, whose elements will be written \(i, j, k, \ldots\).

When \(X_n = i\), the process is in the state \(i\), or visits the state \(i\), at time \(n\). Sequences of independent and identically distributed random variables are stochastic processes, but because of their independence they do not capture the dynamics of evolving systems. To introduce these dynamics, one must take into account the influence of the past, which Markov chains do, much like recurrence equations in deterministic systems [2]. We therefore introduce the following:

Definition 2. For all \(n\in \mathbb{N}\) and all states \(i_0 , i_1, i_2, i_3,...,i_{n-1},i,j \in E\),

\begin{equation}\label{PropertyMarkov} P(X_{n+1}=j\arrowvert X_n =i, X_{n-1}=i_{n-1},\cdots ,X_0 =i_0)=P(X_{n+1}=j\arrowvert X_n =i) \end{equation}
(1)
then the process \(\{X_n\}_{n_{\geq 0}}\) is called a Markov chain.

Equation (1) is called the Markov property. The matrix \(P=\{p_{ij}\}_{i,j\in E},\) where \begin{equation*} p_{ij}=P(X_{n+1}=j\arrowvert X_n =i) \end{equation*} is the probability of moving from \(i\) to \(j\), is called the transition matrix of the chain.
Since all \(p_{ij}\) are probabilities and a transition always occurs from state \(i\) to some state \(j\), one has \(p_{ij}\geq 0\) and \[\sum_{k\in E} p_{ik}=1,~~ \forall i\in E.\] A matrix indexed by E and satisfying the above properties is called a stochastic matrix.
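For illustration, these two properties can be checked on a small assumed three-state example (the matrix below is not taken from this paper); its entries are nonnegative and each of its rows sums to one, so it is a stochastic matrix.

```r
# Assumed 3-state transition matrix, for illustration only
P <- matrix(c(0.5, 0.3, 0.2,
              0.1, 0.6, 0.3,
              0.2, 0.2, 0.6),
            nrow = 3, byrow = TRUE)

all(P >= 0)   # TRUE: all entries are probabilities
rowSums(P)    # each row sums to 1, so P is a stochastic matrix
```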
A Markov chain is said to be a discrete-time chain if the time parameter of the process takes values in a countable set (for example \(n\in\mathbb{N}\)); otherwise, it is a continuous-time chain.

2.2. Continuous-time Markov chains

Definition 3. A continuous-time Markov chain \(X(t)\) is defined by two components: a jump chain, and a set of holding time parameters \(\lambda_i.\) The jump chain consists of a countable set of states \(S\subset\{0,1,2,...\}\) along with transition probabilities \(p_{ij}\), where \(p_{ii}=0\) for all non-absorbing states \(i\in S\). We assume that:

  • 1) If \(X(t)=i\), the time until the state changes has an Exponential\((\lambda_i)\) distribution;
  • 2) If the current state is \(i\), the next state will be \(j\) with probability \(p_{ij}\).
The process satisfies the Markov property (1).

For a continuous Markov chain, the Equation (1) can be rewritten as follows:
\begin{equation}\label{PropertyMarkov2} P_{ij}(t)=P(X(t+s)=j|X(s)=i)=P(X(t)=j|X(0)=i)~~\forall s,t \in (0,+\infty). \end{equation}
(2)
The chain is homogeneous when, as in (2), the transition probability \(P(X(t+s)=j|X(s)=i)\) does not depend on \(s\). When the associated system of differential equations does not depend explicitly on \(t\), it is said to be an autonomous system ([6]), whose stability depends on the signs of its eigenvalues. We can then define the transition matrix \(P(t)\).
Assuming the states are \(1, 2,..., r\), then the state transition matrix for any \(t\geq 0\) is given by
\begin{equation}\label{transtionmatrixCTMC} P(t)=\begin{pmatrix} p_{11}(t) & p_{12}(t) & \cdots & p_{1r}(t) \\ p_{21}(t) & p_{22}(t) & \cdots & p_{2r}(t) \\ \vdots & \vdots & \ddots & \vdots \\ p_{r1}(t) &p_{r2}(t) &\cdots & p_{rr}(t) \end{pmatrix}. \end{equation}
(3)
Let \(X(t)\) be a continuous-time Markov chain with transition matrix \(P(t)\) and state space \(S=\{0,1,2,...\}\). A probability distribution \(\pi\) on \(S\), i.e., a vector \(\pi =[\pi_1, \pi_2,\pi_3,...]\), where \(\pi_i \in [0,1]\) and $$\sum_{i} \pi_i = 1,$$ is said to be a stationary distribution for \(X(t)\) if
\begin{equation}\label{eq:stationarydistribution} \pi =\pi P(t),~~~ \forall t\geq 0. \end{equation}
(4)
The intuition here is exactly the same as in the case of discrete-time chains. If the probability distribution of \(X(0)\) is \(\pi\), then the distribution of \(X(t)\) is also given by \(\pi\), for any \(t\geq 0\). The transition matrix (3) is a solution of the so-called backward Chapman-Kolmogorov equation [7]
\begin{equation}\label{equation diff of probability matrix} P'(t)=GP(t). \end{equation}
(5)
Calculation of Equation (5) may be cumbersome and tedious. This hindrance can be overcome by using lumpability if the transition matrix satisfies some conditions (see [8], [9] and [3]).
The following definition from [9] is important for the sequel of this paper.

Definition 4. Let \(\{X_t\}\) be a Markov chain with state space \(S=\{1,2,\cdots,r\}\) and initial vector \(\pi\). Given a partition \(\bar{S}=\{E_1, E_2, \cdots, E_v\}\) of the space \(S\), a new chain \(\bar{X}_n\) can be defined as follows: at the \(jth\) step, the state of the new chain is the set \(E_k\) when \(E_k\) contains the state of the \(jth\) step of the original chain.

Precisely, a continuous Markov chain is said to be lumpable with respect to the partition \(\bar{S}\) if for \(i,j \in E_\eta\),
\begin{equation} \sum_{k\in E_\theta} p_{ik}(t)=\sum_{k\in E_\theta}p_{jk}(t), \forall t\ge 0. \end{equation}
(6)
According to [8], a Markov chain X, whose transition probability from state \(i\) to state \(j\) is denoted by \(p_{ij}\), is lumpable with respect to the partition \(\bar{S}\) if and only if for every pair of sets \(E_\eta\) and \(E_\theta\), \(\sum_{k\in E_\eta} p_{ik}\) has the same value for every state \(i\) in \(E_\theta\). These common values form the transition probabilities \(p_{\eta, \theta}\) for the lumped chain. Moreover, one has the following theorem from [9].

Theorem 1. Let X(t) be an irreducible continuous-time Markov chain with stationary distribution \(\pi\). If it is lumpable with respect to a partition of the state space, then the lumped chain also has a stationary distribution \(\bar{\pi}\) whose components can be obtained from \(\pi\) by adding corresponding components in the same cell of partition.

Infinitesimal Generator of Continuous-time Markov chains

The infinitesimal generator matrix, usually denoted by G, gives us an alternative way of analyzing continuous-time Markov chains. Consider a continuous-time Markov chain \(X(t)\). Assume \(X(0)=i\). The chain will jump to the next state at time \(T_1\), where \(T_1\sim Exponential(\lambda_i)\). In particular, for a very small \(\delta > 0\), we can write \begin{equation*} P(T_1 \leq \delta)=1-e^{-\lambda_i \delta}\simeq 1-(1-\lambda_i \delta) =\lambda_i\delta. \end{equation*} Thus, in a short interval of length \(\delta\), the probability of leaving state \(i\) is approximately \(\lambda_i \delta\). For this reason, \(\lambda_i\) is often called the transition rate out of state \(i\). Formally, we can write
\begin{equation}\label{transitionrateofstate} \lambda_i =\lim\limits_{\delta\longrightarrow 0^{+}}\left[ \frac{P(X(\delta)\neq i|X(0)=i)}{\delta}\right]. \end{equation}
(7)
More details and the following definition may be found in [10].
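The approximation \(P(T_1\leq\delta)\approx\lambda_i\delta\) can also be verified numerically; in the sketch below the rate \(\lambda_i\) and the values of \(\delta\) are assumptions chosen only for illustration.

```r
# Check that (1 - exp(-lambda * delta)) / delta approaches lambda as delta -> 0
lambda_i <- 1.5                        # assumed holding rate, for illustration
delta    <- c(1e-1, 1e-2, 1e-3, 1e-4)
ratio    <- (1 - exp(-lambda_i * delta)) / delta
print(cbind(delta, ratio))             # ratio tends to lambda_i = 1.5
```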

Definition 5. For a continuous-time Markov chain, we define the generator matrix G. The (i,j)th entry of the generator matrix is given by

\begin{equation} g_{ij}=\left\lbrace \begin{array}{cc} \lambda_i p_{ij}& if ~~i\neq j; \\ -\lambda_i &if~~ i=j. \end{array} \right. \label{generatorEquation1} \end{equation}
(8)
An infinitesimal generator always satisfies the equation
\begin{equation} \sum_j g_{ij}=0. \end{equation}
(9)
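The construction of \(G\) in Equation (8) and the row-sum property (9) can be illustrated with a short R sketch; the jump probabilities and holding rates used below are assumed values, not parameters of the tilings studied later.

```r
# Assumed jump chain (p_ii = 0) and holding-time parameters, for illustration only
P_jump <- matrix(c(0.0, 0.7, 0.3,
                   0.5, 0.0, 0.5,
                   0.4, 0.6, 0.0),
                 nrow = 3, byrow = TRUE)
lambda <- c(1, 2, 1.5)

# Equation (8): g_ij = lambda_i * p_ij for i != j, and g_ii = -lambda_i
G <- lambda * P_jump     # scales row i of P_jump by lambda[i]
diag(G) <- -lambda

rowSums(G)               # Equation (9): every row of G sums to zero
```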
For an infinitesimal generator to be lumpable, it must satisfy the condition contained in the following definition that the reader can check in [9].

Definition 6. We say that an infinitesimal generator G is lumpable if

\begin{equation} \sum_{k\in E_\theta} g_{ik}=\sum_{k\in E_\theta}g_{jk},~~ for ~~i,j \in E_\eta. \end{equation}
(10)

First hitting times

Definition 7. Let \((X_t)_{t\ge 0}\) be a Markov chain with generator matrix G. The hitting time of a subset A\(\subset \)S is the random variable $$\tau^A (\omega )=\inf\{t\ge 0\mid X_t(\omega)\in A\}$$ with the usual convention \(\inf\emptyset =\infty\).

Theorem 2. The vector of mean hitting times \(k^A=\{ k_i^A|i\in S\}\) is the minimal nonnegative solution of

\begin{equation}\label{hittingtime} \begin{cases} k_i^A = 0~&~i\in A;\\ \sum_{j\in S}g_{ij}k_j^A =-1,& i\notin A. \end{cases} \end{equation}
(11)
The reader can find out more about this in [11].
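As a minimal sketch of Theorem 2, once \(k_i^A=0\) is imposed for \(i\in A\), the remaining equations of (11) form a square linear system that can be solved directly; the three-state generator below is an assumed example, not one of the tiling generators studied later.

```r
# Assumed 3-state generator, for illustration only (each row sums to zero)
G <- matrix(c(-2,  1,  1,
               1, -3,  2,
               0,  2, -2),
            nrow = 3, byrow = TRUE)

A    <- 1                      # target set A = {1}
notA <- setdiff(1:3, A)

# Equation (11): k_i = 0 for i in A, and sum_j g_ij k_j = -1 for i not in A.
# With k_A = 0 this reduces to G[notA, notA] %*% k[notA] = -1.
k <- numeric(3)
k[notA] <- solve(G[notA, notA, drop = FALSE], rep(-1, length(notA)))
k                              # mean hitting times of state 1 from states 2 and 3
```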

3. Investigation of the movement on small tiling in continuous time

In this section we investigate the motion of a molecule in two small tilings: the square tiling and the hexagonal one. This movement from a cell \(i\) to a cell \(j\) is considered as being a homogeneous Markov chain. States with the same stochastic behavior are lumped together using symmetries of states deduced from groups acting on the cellular complexes. According to [12], the group acting on a polygon is a dihedral group. In the particular case of the small square tiling, we have the symmetric group \(S_9\) and for the hexagonal tiling we have \(S_7\). Thanks to these groups, we will use the technique of lumpability. This lumpability is effective in forming new chains from the old ones without losing the primitive properties and simplifying tedious calculations.
At each step, the molecule is supposed to leave the central cell and move into the surrounding cells. In [1], it is shown that the movement of a biological molecule on tilings (either square or hexagonal) can be modeled by a (discrete-time) Markov chain. We will extend this study of the movement of a biological molecule on a small tiling to continuous time.

3.1. Continuous-time process in small square tiling

We already have important results from previous works on discrete-time Markov processes in small cell complexes ([1]). We want to extend this study to the continuous case, especially in the square tiling. We will assume a discrete space throughout this study.

Figure 1. Small square tiling.

As already highlighted, a molecule is supposed to be at the central cell (cell 1-1 in Figure 1) at the beginning of the motion. Starting from this position, the molecule can immediately move to one of the following neighboring cells: 2-1, 2-3, 2-5 and 2-7. Thus, the probability of moving to each one of them is the same. However, to move to the cells at the corners, the molecule must move in two steps: first a transit to a surrounding edge cell, and then a move to the corner. This means that there is also the same probability of moving to each corner cell, but this probability differs from the preceding one. In the paragraph below, we analyze this to show how to reduce the calculations for the infinitesimal generator.
3.1.1. Infinitesimal generator and probability matrix
The molecule has four possibilities of moving to a neighboring state, each with, say, probability \(p\). All cells can be reached in one step from the center except those located at the corners (corner cells) of the tiling. Therefore, the infinitesimal generator \(G\) for the square tiling takes the form
\begin{equation} \label{eq: small square unlumped generator} G= \begin{pmatrix} -4\alpha & \alpha & 0 & \alpha & 0 &\alpha & 0 & \alpha & 0\\ \alpha & -3\alpha & \alpha & 0 & 0 & 0& 0 & 0 & \alpha\\ 0& \alpha & -2\alpha & \alpha & 0 & 0 & 0 & 0 & 0\\ \alpha & 0 &\alpha &-3\alpha & \alpha & 0 & 0 &0 & 0\\ 0 & 0 & 0 & \alpha & -2\alpha & \alpha & 0 & 0 & 0\\ \alpha & 0 & 0 & 0 & \alpha & -3\alpha & \alpha & 0 & 0\\ 0 & 0 & 0 & 0 & 0 & \alpha & -2\alpha & \alpha & 0\\ \alpha & 0 & 0 & 0 & 0 & 0 & \alpha & -3\alpha & \alpha\\ 0& \alpha &0 & 0 & 0 & 0 &0 &\alpha &-2\alpha \end{pmatrix} \end{equation}
(12)
where \(\alpha \geq 0\) is the transition rate. This matrix corresponds to an irreducible chain because it is always possible to go from one state to another (see [13] for further details).
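The generator (12) can be entered directly in R and checked against the row-sum property (9); in the sketch below we set \(\alpha=1\) only to obtain a numerical matrix, and the states are ordered as in (12), i.e., the central cell 1-1 followed by the boundary cells 2-1, ..., 2-8.

```r
alpha <- 1   # transition rate; value assumed only for this numerical check

# Generator (12) for the small square tiling, states ordered 1-1, 2-1, ..., 2-8
G <- alpha * matrix(c(
  -4,  1,  0,  1,  0,  1,  0,  1,  0,
   1, -3,  1,  0,  0,  0,  0,  0,  1,
   0,  1, -2,  1,  0,  0,  0,  0,  0,
   1,  0,  1, -3,  1,  0,  0,  0,  0,
   0,  0,  0,  1, -2,  1,  0,  0,  0,
   1,  0,  0,  0,  1, -3,  1,  0,  0,
   0,  0,  0,  0,  0,  1, -2,  1,  0,
   1,  0,  0,  0,  0,  0,  1, -3,  1,
   0,  1,  0,  0,  0,  0,  0,  1, -2),
  nrow = 9, byrow = TRUE)

rowSums(G)   # all zero, as required of an infinitesimal generator by (9)
```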
We now compute the probability matrix \(P(t)\) defined by Equation (3) by using the Chapman-Kolmogorov backward equation (see Equation (5)).
A direct computation of Equation (5) would be tedious because of the size of the matrix in Equation (12). We therefore lump the symmetric states as depicted in Figure 2. This figure shows how the group \(S_9\) induces a partition of the proposed Markov chain.

Figure 2. Lumpability of small square tiling

The original Markov chain is lumped as (1-1), (2-1 2-3 2-5 2-7), (2-2 2-4 2-6 2-8).
The new infinitesimal generator is obtained from
\begin{equation}\label{xxxx} \left\lbrace \begin{array}{ccl} g'_{11}&=&g_{11},\\ g'_{12}&=&g_{12}+g_{14}+g_{16}+g_{18},\\ g'_{13}&=&g_{13} + g_{15}+g_{17}+g_{19},\\ g'_{21}&=&g_{21},\\ g'_{22}&=&g_{22},\\ g'_{23}&=&g_{23}+g_{29},\\ g'_{31}&=&g_{31},\\ g'_{32}&=&g_{32}+g_{34},\\ g'_{33}&=&g_{33}.\\ \end{array} \right. \end{equation}
(13)
The original Markov chain and the new infinitesimal generator satisfy all the hypotheses of Definitions 4 and 6. Substituting each entry by its value in Equation (13), we get the new infinitesimal generator
\begin{equation}\label{lupmed small square generator} G'=\begin{pmatrix} -4\alpha &4\alpha & 0\\ \alpha & -3\alpha & 2\alpha \\ 0 & 2\alpha & -2\alpha \end{pmatrix}. \end{equation}
(14)
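The passage from (12) to (14) can be reproduced with a short sketch: we rebuild the 9×9 generator (again with the assumed value \(\alpha=1\)), sum its columns over each block of the partition (1-1), (2-1 2-3 2-5 2-7), (2-2 2-4 2-6 2-8), check that these block sums coincide for all states of the same block, which is the lumpability condition (10) for generators, and keep one representative row per block.

```r
alpha <- 1   # transition rate, set to 1 only for this numerical check

# Rebuild the 9x9 generator of (12): state 1 is the central cell 1-1 and
# states 2..9 are the boundary cells 2-1, 2-2, ..., 2-8 in cyclic order.
G <- matrix(0, 9, 9)
ring <- 2:9
for (i in 1:8) {                         # each boundary cell touches its two ring neighbours
  G[ring[i], ring[if (i == 1) 8 else i - 1]] <- alpha
  G[ring[i], ring[if (i == 8) 1 else i + 1]] <- alpha
}
G[1, c(2, 4, 6, 8)] <- alpha             # centre <-> edge cells 2-1, 2-3, 2-5, 2-7
G[c(2, 4, 6, 8), 1] <- alpha
diag(G) <- -rowSums(G)                   # diagonal entries make each row sum to zero

# Partition: {1-1}, {2-1, 2-3, 2-5, 2-7}, {2-2, 2-4, 2-6, 2-8}
blocks <- list(1, c(2, 4, 6, 8), c(3, 5, 7, 9))

# Column sums of G within each block; lumpability (10) requires these sums to be
# equal for all states belonging to the same block of the partition.
S <- sapply(blocks, function(B) rowSums(G[, B, drop = FALSE]))
for (B in blocks) print(S[B, , drop = FALSE])

# One representative row per block gives the lumped generator (14)
G_lumped <- S[sapply(blocks, function(B) B[1]), ]
G_lumped   # rows (-4, 4, 0), (1, -3, 2), (0, 2, -2), times alpha, as in (14)
```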
Substituting Equation (14) into Equation (5), we get, for \begin{equation*} P(t)= \begin{pmatrix} p_{11}& p_{12} & p_{13} \\ p_{21}& p_{22} & p_{23} \\ p_{31}& p_{32} & p_{33} \end{pmatrix}, \end{equation*} where all \(p_{ij}\) (\(i,j\in\{1,2,3\}\)) are functions of the same variable \(t\), the following system:
\begin{equation} \begin{pmatrix} p'_{11}& p'_{12} & p'_{13} \\ p'_{21}& p'_{22} & p'_{23} \\ p'_{31}& p'_{32} & p'_{33} \end{pmatrix} =\begin{pmatrix} -4\alpha &4\alpha & 0\\ \alpha & -3\alpha & 2\alpha \\ 0 & 2\alpha & -2\alpha \end{pmatrix} \begin{pmatrix} p_{11}& p_{12} & p_{13} \\ p_{21}& p_{22} & p_{23} \\ p_{31}& p_{32} & p_{33} \end{pmatrix}, \end{equation}
(15)
where \(p'_{ij}\) denotes the derivative of \(p_{ij}\) (\(i,j\in \{1,2,3\}\)). Carrying out the multiplication on the right-hand side yields:
\begin{equation} \left\lbrace \begin{array}{ccc} p'_{11} &= & 4\alpha (p_{21} - p_{11}) ,\\ p'_{21}&= & \alpha( p_{11} -3p_{21} +2p_{31}),\\ p'_{31}&= & 2\alpha( p_{21} -p_{31}),\\ p'_{12}&= &4\alpha (p_{22} - p_{12}),\\ p'_{22}&= & \alpha (p_{12}-3 p_{22}+2p_{32}), \\ p'_{32}& = & 2\alpha (p_{22}- p_{32}), \\ p'_{13}&= & 4\alpha( p_{23} -p_{13}),\\ p'_{23}& = & \alpha (p_{13} -3 p_{23}+2 p_{33}),\\ p'_{33}&= & 2\alpha( p_{23} -p_{33}). \end{array} \right. \end{equation}
(16)
This system is made of equivalent equations. Thus, instead of solving the whole system, we just solve one of the systems with three equations. We can either solve
\begin{equation}\label{SmallSquareTilingSystem1Part1} \left\lbrace \begin{array}{ccc} p'_{11} &= & 4\alpha (p_{21} - p_{11}), \\ p'_{21}&= & \alpha( p_{11} -3p_{21} +2p_{31}),\\ p'_{31}&= & 2\alpha( p_{21} -p_{31}), \end{array} \right. \end{equation}
(17)
or
\begin{equation}\label{SmallSquareTilingSystem1Part2} \left\lbrace \begin{array}{ccc} p'_{12}&= &4\alpha (p_{22} - p_{12}),\\ p'_{22}&= & \alpha (p_{12}-3 p_{22}+2p_{32}), \\ p'_{32}& = & 2\alpha (p_{22}- p_{32}), \end{array} \right. \end{equation}
(18)
or again
\begin{equation}\label{SmallSquareTilingSystem1Part3} \left\lbrace \begin{array}{ccc} p'_{13}&= & 4\alpha( p_{23} -p_{13}),\\ p'_{23}& = & \alpha (p_{13} -3 p_{23}+2 p_{33}),\\ p'_{33}&= & 2\alpha( p_{23} -p_{33}). \end{array} \right. \end{equation}
(19)
Algebraic computations show that the matrix associated to any of the subsystems (i.e., Equations (17), (18) and (19)) has three eigenvalues, \(\lambda_1 =-6\alpha,~~ \lambda_2 = -3\alpha\) and \(\lambda_3 =0\), with corresponding eigenvectors $$v_1 =\begin{pmatrix} 1\\ \frac{-1}{2}\\ \frac{1}{4} \end{pmatrix},~~~ v_2 =\begin{pmatrix} 1\\ \frac{1}{4}\\ \frac{-1}{2} \end{pmatrix}, ~~~ v_3 =\begin{pmatrix} 1\\ 1\\ 1 \end{pmatrix}.$$ The general solution of each subsystem can be written as
\begin{equation} \begin{pmatrix} p_{1j}\\ p_{2j}\\ p_{3j} \end{pmatrix}=c_1 v_1e^{\lambda_1 t} + c_2 v_2 e^{\lambda_2 t} + c_3 v_3e^{\lambda_3 t}, \end{equation}
(20)
where \(c_i\) are constants. We then have successively
\begin{equation} \begin{pmatrix} p_{11}\\ p_{21}\\ p_{31} \end{pmatrix} = c_1 \begin{pmatrix} 1\\ \frac{-1}{2}\\ \frac{1}{4} \end{pmatrix} e^{-6\alpha t}+c_2 \begin{pmatrix} 1\\ \frac{1}{4}\\ \frac{-1}{2} \end{pmatrix}e^{-3\alpha t} +c_3 \begin{pmatrix} 1\\ 1\\ 1 \end{pmatrix}, \end{equation}
(21)
\begin{equation} \begin{pmatrix} p_{12}\\ p_{22}\\ p_{32} \end{pmatrix} = c'_1 \begin{pmatrix} 1\\ \frac{-1}{2}\\ \frac{1}{4} \end{pmatrix} e^{-6\alpha t}+c'_2 \begin{pmatrix} 1\\ \frac{1}{4}\\ \frac{-1}{2} \end{pmatrix}e^{-3\alpha t} +c'_3 \begin{pmatrix} 1\\ 1\\ 1 \end{pmatrix}, \end{equation}
(22)
\begin{equation} \begin{pmatrix} p_{13}\\ p_{23}\\ p_{33} \end{pmatrix} = c''_1 \begin{pmatrix} 1\\ \frac{-1}{2}\\ \frac{1}{4} \end{pmatrix} e^{-6\alpha t}+c''_2 \begin{pmatrix} 1\\ \frac{1}{4}\\ \frac{-1}{2} \end{pmatrix}e^{-3\alpha t} +c''_3 \begin{pmatrix} 1\\ 1\\ 1 \end{pmatrix}. \end{equation}
(23)
Since \(P(0)=I_3\), after substitution and computations, we get
\begin{equation}\label{Solution matrix expsmallsquare} P(t)=\left\lbrace \begin{array}{ccc} p_{11} &= & \frac{4}{9}(e^{-6\alpha t}+ e^{-3\alpha t} +\frac{1}{4}),\\ p_{21}&= & \frac{-1}{9}(2e^{-6\alpha t}-e^{-3\alpha t} -1),\\ p_{31}&= & \frac{1}{9}(e^{-6\alpha t}-2e^{-3\alpha t} +1),\\ p_{12}&= &\frac{-4}{9}(2e^{-6\alpha t}-e^{-3\alpha t} -1),\\ p_{22}&= & \frac{4}{9}(e^{-6\alpha t}+\frac{1}{4}e^{-3\alpha t} +1),\\ p_{32}& = & \frac{-2}{9}(e^{-6\alpha t}+e^{-3\alpha t} -2),\\ p_{13}&= & \frac{4}{9}(e^{-6\alpha t}-2e^{-3\alpha t} +1),\\ p_{23}& = & \frac{-2}{9}(e^{-6\alpha t}+e^{-3\alpha t} -2),\\ p_{33}&= & \frac{4}{9}(\frac{1}{4}e^{-6\alpha t}+e^{-3\alpha t} +1).\\ \end{array} \right. \end{equation}
(24)
Equation (24) can also be obtained by directly computing the exponential of the product of \(t\) and the lumped infinitesimal generator \(G'\).
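The following short Python check (our addition) confirms numerically that the closed form (24) coincides with \(e^{tG'}\), that each row of \(P(t)\) sums to one, and that the eigenvalues of \(G'\) are indeed \(-6\alpha\), \(-3\alpha\) and \(0\); the values of \(\alpha\) and \(t\) below are arbitrary positive test values.

import numpy as np
from scipy.linalg import expm

alpha, t = 0.7, 1.3                      # arbitrary positive test values
Gp = alpha * np.array([[-4., 4., 0.],
                       [ 1., -3., 2.],
                       [ 0., 2., -2.]])

e6, e3 = np.exp(-6 * alpha * t), np.exp(-3 * alpha * t)
P = np.array([
    [ 4/9 * (e6 + e3 + 1/4), -4/9 * (2*e6 - e3 - 1),  4/9 * (e6 - 2*e3 + 1)],
    [-1/9 * (2*e6 - e3 - 1),  4/9 * (e6 + e3/4 + 1), -2/9 * (e6 + e3 - 2)],
    [ 1/9 * (e6 - 2*e3 + 1), -2/9 * (e6 + e3 - 2),    4/9 * (e6/4 + e3 + 1)],
])

print(np.allclose(P, expm(Gp * t)))      # True: (24) is exp(t G')
print(P.sum(axis=1))                     # each row sums to 1
print(np.linalg.eigvals(Gp))             # -6*alpha, -3*alpha, 0 (in some order)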
3.1.2. Stationary distribution and limiting probability
A stationary distribution of a Markov chain is a probability distribution that remains unchanged as time progresses. Typically, it is represented as a row vector \(\pi\) whose entries are probabilities summing to \(1\) and, given the transition matrix \(P\), it satisfies Equation (4). It can be shown (see [14]) that Equation (4) is equivalent to
\begin{equation} \label{StationaryDistribution with Inf Generator} \pi G = 0 \end{equation}
(25)
with \(G\) the infinitesimal generator of the chain. For the stationary distribution \(\pi\) of the lumped chain above, the relation (25) reads
\begin{equation} \label{yyy} \begin{pmatrix} \pi_1 & \pi_2 & \pi_3 \end{pmatrix} \begin{pmatrix} -4 \alpha & 4\alpha & 0\\ \alpha & -3 \alpha & 2\alpha \\ 0 & 2 \alpha & -2\alpha \end{pmatrix} = (0,0,0). \end{equation}
(26)
Equation (26) together with \(\sum\limits_{i} \pi_i =1\) yields
\begin{equation} \label{zzz} \left\lbrace \begin{array}{cc} -4\alpha\pi_1 +\alpha\pi_2 & = 0,\\ 4\alpha \pi_1 -3\alpha\pi_2 +2\alpha\pi_3 & = 0,\\ 2\alpha\pi_2 -2\alpha\pi_3 &=0,\\ \pi_1 + \pi_2 + \pi_3 & = 1. \end{array} \right. \end{equation}
(27)
Solving this system (Equation (27)), we find the stationary distribution \(\pi =\left(\frac{1}{9},\frac{4}{9},\frac{4}{9}\right).\) This stationary distribution is exactly the same as the one associated with the original, non-lumped chain. Another quantity related to the stationary distribution is the limiting distribution.
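Numerically, Equation (27) is just a small linear system: stack the transposed generator on top of the normalisation row and solve. The sketch below is our illustration of this computation, not code from the paper.

import numpy as np

alpha = 1.0
Gp = alpha * np.array([[-4., 4., 0.],
                       [ 1., -3., 2.],
                       [ 0., 2., -2.]])

# pi G' = 0  together with  pi_1 + pi_2 + pi_3 = 1
A = np.vstack([Gp.T, np.ones(3)])
b = np.array([0., 0., 0., 1.])
pi, *_ = np.linalg.lstsq(A, b, rcond=None)
print(pi)                                # [1/9, 4/9, 4/9]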

Definition 8. The limiting distribution of a Markov chain describes how the process behaves after a long time.

For the limiting distribution to exist, the following limit must exist for any states \(i\) and \(j\)
\begin{equation} \label{Limiting probability equation} \lim_{n\longrightarrow\infty} \mathbb{P}(X_n =j \mid X_0 =i). \end{equation}
(28)
Furthermore, for any state \(i\), the following sum must be \(1\).
\begin{equation}\label{limitingtime} \sum_{states~ j}\lim_{n\longrightarrow\infty} \mathbb{P}(X_n =j \mid X_0 =i)=1. \end{equation}
(29)
This ensures that the numbers obtained do, in fact, constitute a probability distribution. Provided these two conditions are met, the limiting distribution of a Markov chain with \(X_0 =i\) is the probability distribution \(l=(L_{ij})_{states~j}\), where \(L_{ij}\) denotes the limit in Equation (28). For any time-homogeneous Markov chain that is aperiodic and irreducible, \(\lim_{n\longrightarrow\infty} \mathbf{P}^n\) converges to a matrix with all rows identical and equal to \(\pi\).
For time-homogeneous Markov chains, any limiting distribution is a stationary distribution [15]. Applying the limit in Equation (28) to the matrix in Equation (24) gives $$ \mathbf{P}_{\pi}=\begin{pmatrix} \frac{1}{9}&\frac{4}{9} & \frac{4}{9} \\ \frac{1}{9}&\frac{4}{9} & \frac{4}{9} \\ \frac{1}{9}& \frac{4}{9} & \frac{4}{9} \end{pmatrix}, $$ which is the limiting distribution of the Markov chain deduced from the square tiling. It is a stochastic matrix and satisfies Equation (29), as expected.
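As a numerical illustration (our addition), evaluating \(e^{tG'}\) at a large time shows every row approaching the stationary distribution \(\left(\frac{1}{9},\frac{4}{9},\frac{4}{9}\right)\), in agreement with the matrix \(\mathbf{P}_{\pi}\) above.

import numpy as np
from scipy.linalg import expm

alpha = 1.0
Gp = alpha * np.array([[-4., 4., 0.],
                       [ 1., -3., 2.],
                       [ 0., 2., -2.]])
print(np.round(expm(Gp * 50.0), 6))      # every row is close to [0.111 0.444 0.444]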
3.1.3. Calculation of the mean hitting times
In this section, we compute the mean time spent by the molecule before reaching a cell for the first time, by using Equation (11). For \(A=\{1\}\), we have the following system: \begin{equation*} \left\lbrace \begin{array}{rrr} k_1^A &=&0,\\ g_{21}k_1^A +g_{22}k_2^A +g_{23}k_3^A &= &-1,\\ g_{31}k_1^A +g_{32}k_2^A +g_{33}k_3^A &=&-1,\\ \end{array} \right. \end{equation*} whose solution is \( \begin{pmatrix} 0 \\ \\ \frac{2}{\alpha} \\ \\ \frac{5}{2\alpha} \end{pmatrix} \) after substituting all the \(g_{ij}\) with their corresponding values in Equation (14). In the same way, we have for \(A=\{2\}\) and \(A=\{3\}\) respectively the vectors \( \begin{pmatrix} \frac{1}{4\alpha} \\ \\ 0 \\ \\ \frac{1}{2\alpha} \end{pmatrix} \) and \( \begin{pmatrix} \frac{7}{8\alpha} \\ \\ \frac{5}{8\alpha} \\ \\ 0 \end{pmatrix}. \) The matrix \(H\) below summarizes the findings for the mean hitting times:
\begin{equation} H= \begin{pmatrix} 0 & \frac{1}{4\alpha} & \frac{7}{8\alpha} \\ & & \\ \frac{2}{\alpha} & 0 & \frac{5}{8\alpha} \\ & & \\ \frac{5}{2\alpha} & \frac{1}{2\alpha} & 0 \end{pmatrix}. \label{mean hitting time square} \end{equation}
(30)
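The three linear systems above can be solved in one loop: for each target state \(a\), set \(k_a=0\) and solve the generator restricted to the remaining states against a right-hand side of \(-1\)'s. The Python sketch below (our addition) reproduces the matrix \(H\) of Equation (30) for \(\alpha=1\).

import numpy as np

alpha = 1.0
Gp = alpha * np.array([[-4., 4., 0.],
                       [ 1., -3., 2.],
                       [ 0., 2., -2.]])

n = Gp.shape[0]
H = np.zeros((n, n))
for a in range(n):                        # a is the target state
    rest = [i for i in range(n) if i != a]
    k = np.linalg.solve(Gp[np.ix_(rest, rest)], -np.ones(n - 1))
    H[rest, a] = k                        # mean time to hit a, starting from each other state
print(H)
# [[0.    0.25  0.875]
#  [2.    0.    0.625]
#  [2.5   0.5   0.   ]]   i.e. Equation (30) with alpha = 1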

3.1.5. R simulation of effect of probability and time on the movement in square tiling

Figures 3 and 4 plot the state probabilities against time. Figure 3, in particular, shows how the variation in the transition rate affects the attainment of the equilibrium. By comparing graph 3a and graph 3c, we can see that varying the parameter \(\alpha\) affects the oscillation of the state curves. This means that the variation of the transition rate influences the attainment of the equilibrium. We notice that on graph 3c, where the probability (transition rate) is the smallest, the equilibrium state is reached more quickly than on the other two graphs of the same figure.

Figure 3. Visualization of the effect of the variation of the transition rates for a fixed time (time=50) in a small square tiling

On the other hand, Figure 4 shows the behavior of the curves as the time horizon varies, for a fixed value of the transition rate. By comparing Figure 4a and Figure 4c, we find that the state curves reach stability at almost the fifteenth unit of time. This explains why, over a larger time interval, the equilibrium status seems to be reached very early.
For example, if we choose the second as the unit of time, we can note from Figure 4a that the equilibrium phase starts at almost the eighth second. Over a larger interval of time (such as 100 in Figure 4b or Figure 4c), the equilibrium attainment time is still around the fifteenth second. It can also be seen that the starting state curve is less steep in Figure 4a, where the time horizon is 30, than in graph 4c, where it is 150.

Figure 4. Visualization of the effect of the variation of time for a fixed transition rate (\(\alpha=\frac{1}{8}\)) in a square tiling

From these two panels of graphs, we see that the speed at which the equilibrium is attained is not dictated by the duration of the movement but by the value of the probability (transition rate).

4. Continuous-time process in small hexagonal tiling

In this section, we examine how some parameters influence the behavior of the motion in the hexagonal tiling. We consider the aggregated (lumped) cells in order to reduce the computations. The unique starting position is the central cell 1-1.

4.1. Infinitesimal generator and probability matrix

Let us consider the small hexagonal tiling depicted in Figure 5. From the central cell, the molecule (the system) has six possible equiprobable destinations, which are its neighboring cells. Based on this information, we then produce the following infinitesimal generator:

Figure 5. Small hexagonal tiling

\begin{equation}\label{Generator small hexagonal tiling} G=\begin{pmatrix} -6\alpha& \alpha &\alpha & \alpha & \alpha & \alpha & \alpha \\ \alpha & -3\alpha & \alpha & 0 & 0& 0 & \alpha\\ \alpha &\alpha & -3\alpha &\alpha & 0& 0 & 0\\ \alpha & 0 & \alpha & -3\alpha & \alpha & 0 & 0\\ \alpha & 0 & 0 &\alpha & -3\alpha & \alpha & 0 \\ \alpha & 0 &0 & 0 & \alpha &-3\alpha & \alpha \\ \alpha & \alpha & 0 & 0& 0 & \alpha & -3\alpha \end{pmatrix} \end{equation}
(31)
In Figure 5 we have two kinds of equivalent cells: the central cell and the surrounding ones. Thus, we can partition the chain into two states instead of seven. Figure 6 summarizes exactly what happens when lumping the equivalent cells.

Figure 6. Lumpability of states in small hexagonal tiling

The new infinitesimal generator may be written in the following way:
\begin{equation}\label{New small hexagonal inf. gen} G' =\begin{pmatrix} -6\alpha & 6\alpha \\ \alpha & -\alpha \end{pmatrix}. \end{equation}
(32)
The probability matrix \(P(t)\) is solution to the Kolmogorov Equation (5) and can be written as
\begin{equation} P(t) =e^{G't} = \begin{pmatrix} \frac{1}{7}(1+6e^{-7\alpha t})& \frac{6}{7}(1-e^{-7\alpha t}) \\ \frac{1}{7}(1-e^{-7\alpha t})& \frac{1}{7}(6+e^{-7\alpha t}) \end{pmatrix}. \end{equation}
(33)
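Again as a quick check (our addition), the matrix (33) agrees with \(e^{tG'}\) computed numerically from the lumped generator (32) for arbitrary positive \(\alpha\) and \(t\).

import numpy as np
from scipy.linalg import expm

alpha, t = 0.4, 2.0                      # arbitrary positive test values
Gp = alpha * np.array([[-6., 6.],
                       [ 1., -1.]])

e7 = np.exp(-7 * alpha * t)
P = np.array([[(1 + 6 * e7) / 7, 6 * (1 - e7) / 7],
              [(1 - e7) / 7,     (6 + e7) / 7]])
print(np.allclose(P, expm(Gp * t)))      # True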

4.2. Stationary distribution and limiting probability

The stationary distribution of the lumped chain is the vector \(\pi=(\pi_1, \pi_2)\) such that
\begin{equation} \pi G' =0. \end{equation}
(34)
Doing the necessary substitutions, we get $$ \begin{pmatrix} \pi_1 & \pi_2 \end{pmatrix} \begin{pmatrix} -6\alpha & 6\alpha \\ \alpha & -\alpha \end{pmatrix} =(0,0), \quad \text{i.e.,}\quad \left\lbrace \begin{array}{cc} -6\alpha\pi_1 +\alpha \pi_2 &=0, \\ 6\alpha\pi_1 -\alpha\pi_2 & = 0. \end{array} \right. $$ This relation together with \(\pi_1 + \pi_2 = 1\) yields \(\pi =\left(\frac{1}{7}, \frac{6}{7}\right).\) The second component of the stationary distribution is the sum of the stationary probabilities of the six aggregated states. To compute the limiting distribution, we again use the formula given in Equation (28). We then have \(\lim\limits_{t\rightarrow \infty} P(t)=\begin{pmatrix} \frac{1}{7}& \frac{6}{7} \\ \frac{1}{7}& \frac{6}{7} \end{pmatrix} \) as expected.

4.3. Calculation of the mean hitting time

It is easy to check that the matrix of the mean hitting time for the movement of the particle in the hexagonal tiling is
\begin{equation} H=\begin{pmatrix} 0 & \frac{1}{6\alpha}\\ \frac{1}{\alpha} & 0 \end{pmatrix}. \label{eq.mean hitting hexagonal} \end{equation}
(35)
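For completeness, the same two computations as in the square case, the stationary distribution and the mean hitting times, can be repeated on the lumped hexagonal generator (32); the short sketch below (our addition) recovers \(\pi=\left(\frac{1}{7},\frac{6}{7}\right)\) and the matrix (35) for \(\alpha=1\).

import numpy as np

alpha = 1.0
Gp = alpha * np.array([[-6., 6.],
                       [ 1., -1.]])

# stationary distribution: pi G' = 0 with pi_1 + pi_2 = 1
A = np.vstack([Gp.T, np.ones(2)])
pi, *_ = np.linalg.lstsq(A, np.array([0., 0., 1.]), rcond=None)
print(pi)                                # [1/7, 6/7]

# mean hitting times, computed exactly as in the square case
H = np.zeros((2, 2))
for a in range(2):
    rest = [1 - a]
    H[rest, a] = np.linalg.solve(Gp[np.ix_(rest, rest)], [-1.0])
print(H)                                 # [[0, 1/6], [1, 0]] = Equation (35) with alpha = 1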

4.4. Simulation of the effects of probability and time on the movement

Figure 7 and Figure 8 plot the impact of the probability and the time on the attainment of the equilibrium in a hexagonal tiling when we consider continuous time.

Figure 7. Simulation of effect of variation of time on the attainment of the equilibrium for fixed probability(transition rate \(\alpha=\frac{1}{8}\))

Figure 8. Simulation of effect of variation of probability on the attainment of the equilibrium for fixed time

The collection of graphs illustrated in Figures 7 and 8 depicts how fast the molecule reaches the equilibrium in the hexagonal tiling in continuous time. The main factor which affects the attainment of the equilibrium is the transition rate. The transition rate \(\alpha=\frac{1}{6}\) is a critical value which particularly affects the motion of the molecule in the hexagonal tiling. For this value, if the molecule quits the central cell, it will never come back into it. A quick substitution in Equation (35) and a glance at Figure 8b allow one to verify this.

5. Discussion of results and conclusion

Under continuous-time conditions, we have verified the same results. In fact, Equations (30) and (35) show that the average transition time from state 1 to state 2 is greater in the square tiling than in the hexagonal tiling. A glance at the panels of graphs depicted above shows that the greater the probability (transition rate), the later the equilibrium is reached, in both the square and the hexagonal tilings. However, when comparing the movement in both tilings, we realize that the equilibrium is reached more quickly in the hexagonal tiling than in the square one. Increasing the value of the transition rate thus leads to an earlier or later attainment of the equilibrium.

In this paper, the movement of a molecule in two kinds of tilings has been studied: the square tiling and the hexagonal one. It has been established that only two parameters, among the four considered, have an impact on the earlier or later attainment of the equilibrium. The parameters under consideration in this study were the nature of time (discrete or continuous), the probability (the so-called transition rate), the time, and the shape of the tiling.

In [1], the movements of the molecules in the tilings were modelled using discrete-time Markov chains. It was established that the motion reaches the equilibrium point faster in the hexagonal tiling than in the square one. The same finding is established here for continuous-time Markov chains. It is to be deduced that the nature of time does not have an impact on reaching the equilibrium point. However, the shape of the tiling is a core parameter for the attainment of the equilibrium; that is, the molecule is faster in the hexagonal tiling than in the square one.

Another important parameter is the transition rate in the infinitesimal generator. During this study, it has been demonstrated that, for both the hexagonal and the square tilings, the rapidity of attaining the equilibrium also depends upon the transition rate under consideration: the smaller the transition rate, the faster the molecule reaches the equilibrium position, and vice versa. In a nutshell, this study has shown that the transition rate and the shape of the tiling are the parameters that matter for the rapidity of the movement; the other parameters do not have a considerable impact on the movement.

Author Contributions

All authors contributed equally to the writing of this paper. All authors read and approved the final manuscript.

Competing Interests

The author(s) do not have any competing interests in the manuscript.

References

  1. Saiguran, M., Ring, A., & Ibrahim, A. (2019). Evaluation of Markov chains to describe movements on tiling. Open Journal of Mathematical Sciences, 3(1), 358-381.[Google Scholor]
  2. Brémaud, P. (2009). Initiation aux Probabilités: et aux chaînes de Markov. Springer Science & Business Media.[Google Scholor]
  3. Ring, A. (2004). State symmetries in matrices and vectors on finite state spaces. arXiv preprint math/0409264.[Google Scholor]
  4. Barr, D. R., & Thomas, M. U. (1977). An eigenvector condition for Markov chain lumpability. Operations Research, 25(6), 1028-1031.[Google Scholor]
  5. Beneš, V. E. (1978). Reduction of network states under symmetries. Bell System Technical Journal, 57(1), 111-149.[Google Scholor]
  6. Nagle, R. K., Saff, E. B., Snider, A. D., & West, B. (1996). Fundamentals of differential equations and boundary value problems. Reading: Addison-Wesley, Pearson.[Google Scholor]
  7. Whitt, W. (2013). Continuous-time Markov chains. Department of Industrial Engineering and Operations Research, Columbia University, New York, December 2013.[Google Scholor]
  8. Kemeny, J. G., & Snell, J. L. (1976). Finite markov chains. Undergraduate Texts in Mathematics.[Google Scholor]
  9. Tian, J. P., & Kannan, D. (2006). Lumpability and commutativity of Markov processes. Stochastic analysis and Applications, 24(3), 685-702.[Google Scholor]
  10. Yin, G. G., & Zhang, Q. (2012). Continuous-time Markov chains and applications: a two-time-scale approach (Vol. 37). Springer Science & Business Media, New York.[Google Scholor]
  11. Cameron, M., & Gan, T. (2016). A graph-algorithmic approach for the study of metastability in markov chains. arXiv preprint arXiv:1607.00078.[Google Scholor]
  12. Dihedral group. https://en.wikipedia.org/wiki/Dihedral\_group, accessed 20/01/2020.
  13. Meyn, S. P., & Tweedie, R. L. (2012). Markov chains and stochastic stability. Springer Science & Business Media.[Google Scholor]
  14. Levin, D. A., & Peres, Y. (2017). Markov chains and mixing times (Vol. 107). American Mathematical Society.[Google Scholor]
  15. Schuette, C., & Metzner, P. (2009). Markov Chains and Jump Processes: An Introduction to Markov Chains and Jump Processes on Countable State Spaces. Freie Universität Berlin, Berlin.[Google Scholor]
]]>
Exponential growth of solution with \(L_p\)-norm for class of non-linear viscoelastic wave equation with distributed delay term for large initial data https://old.pisrt.org/psr-press/journals/oma-vol-4-issue-1-2020/exponential-growth-of-solution-with-l_p-norm-for-class-of-non-linear-viscoelastic-wave-equation-with-distributed-delay-term-for-large-initial-data/ Mon, 22 Jun 2020 16:00:28 +0000 https://old.pisrt.org/?p=4196
OMA-Vol. 4 (2020), Issue 1, pp. 76 - 83 Open Access Full-Text PDF
Abdelbaki Choucha, Djamel Ouchenane, Khaled Zennir
Abstract: In this work, we are concerned with a problem for a viscoelastic wave equation with strong damping, nonlinear source and distributed delay terms. We show the exponential growth of solution with \(L_{p}\)-norm, i.e., \(\lim\limits_{t\rightarrow \infty}\Vert u\Vert_p^p \rightarrow \infty\).
]]>

Open Journal of Mathematical Analysis

Exponential growth of solution with \(L_p\)-norm for class of non-linear viscoelastic wave equation with distributed delay term for large initial data

Abdelbaki Choucha\(^1\), Djamel Ouchenane, Khaled Zennir
Department of Mathematics, Faculty of Exact Sciences, University of El Oued, B.P. 789, El Oued 39000, Algeria.; (A.C)
Laboratory of pure and applied Mathematics, Amar Teledji Laghouat University, Algeria.; (D.O)
Department of Mathematics, College of Sciences and Arts, Qassim University, Ar-Rass, Saudi Arabia.; (K.Z)
\(^1\)Corresponding Author: abdelbaki.choucha@gmail.com

Abstract

In this work, we are concerned with a problem for a viscoelastic wave equation with strong damping, nonlinear source and distributed delay terms. We show the exponential growth of solution with \(L_{p}\)-norm, i.e., \(\lim\limits_{t\rightarrow \infty}\Vert u\Vert_p^p \rightarrow \infty\).

Keywords:

Strong damping, viscoelasticity, nonlinear source, exponential growth, distributed delay.

1. Introduction

The well known "Growth" phenomenon is one of the most important phenomena of asymptotic behavior, where many researches omit from its study especially when it comes from the evolution problems. It gives us very important information to know the behavior of equation when time arrives at infinity, it differs from global existence and blow up in both mathematically and in applications point of view. Although the interest of the scientific community for the study of delayed problems is fairly recent, multiple techniques have already been explored in depth.

In this direction, we are concerned with the delayed damped system

\begin{equation} \left\{ \begin{array}{l} u_{tt}-\Delta u-\omega\Delta u_{t}+\displaystyle\int_{0}^{t }\varpi(t-q) \Delta u(q) dq\\ +\mu _{1}u_{t} +\displaystyle\int_{\tau _{1}}^{\tau _{2}}\vert\mu_{2} (q)\vert u_{t}(x, t-q) dq=b\vert u\vert ^{p-2}.u, \ x\in \Omega, t>0, \\ u\left( x, t\right) =0, x\in \partial \Omega, \\ u_{t}\left( x, -t \right) =f_{0}\left( x, t \right), \ (x, t)\in \Omega\times\left( 0, \tau_{2} \right), \\ u\left( x, 0\right) =u_{0}\left( x\right), u_{t}\left( x, 0\right) =u_{1}\left( x\right), x\in \Omega,% \end{array}% \right. \label{system1} \end{equation}
(1)
where \(\omega, b, \mu_{1}\) are positive constants, \(p>2\), \(\tau_{1}, \tau_{2}\) determine the time delay with \(0\leq\tau_{1}< \tau_{2}\), \(\mu_{2}\) is a bounded function and \(\varpi\) is a differentiable function.

It is well known that viscoelastic materials, in contrast to purely elastic ones, have the capacity both to store and to dissipate mechanical energy. The mechanical properties of these viscous substances are of great importance, as they appear in many applications of the applied sciences, and many researchers have paid attention to this problem.

In the absence of the strong damping term \(\Delta u_{t}\) (that is, for \(\omega=0\)) and of the distributed delay term, Problem (1) has been investigated by many authors, and results on local/global existence and stability have been established; see for example [1, 2, 3, 4]. In [5], the authors looked into the following system

\begin{equation} u_{tt}-\Delta u+\displaystyle\int_{0}^{t }\varpi(t-s) \Delta u(s) ds+a(x)u_{t}+\vert u\vert ^{\gamma}u=0, \end{equation}
(2)
for which a decay result of exponential rate was shown.

In [6], Song and Xue considered the following viscoelastic equation with strong damping:
\begin{equation} \left\{ \begin{array}{l} u_{tt}-\Delta u+\displaystyle\int_{0}^{\infty }g(t-s) \Delta u(s) ds-\Delta u_{t} =\vert u\vert ^{p-2}.u, \ x\in \Omega, t>0, \\ u\left( x, 0\right) =u_{0}\left( x\right), u_{t}\left( x, 0\right) =u_{1}\left( x\right).\label{p1.1} \end{array} \right. \end{equation}
(3)
The authors showed, under suitable conditions on \(g\), that there are solutions of (3) with arbitrarily high initial energy that blow up in finite time. For the same Problem (3), Song and Zhong showed in [7] that there are solutions of (3) with positive initial energy that blow up in finite time. In [8], Zennir considered the following viscoelastic equation with strong damping:
\begin{equation} \left\{ \begin{array}{l} u_{tt}-\Delta u-\omega\Delta u_{t}+\displaystyle\int_{0}^{t }g(t-s) \Delta u(s) ds\\ +a\vert u_{t}\vert ^{m-2}.u_{t} =\vert u\vert ^{p-2}.u, \ x\in \Omega, t>0, \\ u\left( x, 0\right) =u_{0}\left( x\right), u_{t}\left( x, 0\right) =u_{1}\left( x\right), \ x\in \Omega\\ u(x, t)=0, \ x\in \partial\Omega. \end{array}%\label{p1.1} \right. \end{equation}
(4)
They proved the exponential growth result under suitable assumptions.
In [9] the authors considered the following problem for a nonlinear viscoelastic wave equation with strong damping, nonlinear damping and source terms
\begin{equation} \left\{ \begin{array}{l} u_{tt}-\Delta u+\int_{0}^{\infty }g(s) \Delta u(t-s) ds-\varepsilon_{1}\Delta u_{t}+\varepsilon_{2}u_{t}\vert u_{t}\vert ^{m-2}=\varepsilon_{3}u\vert u\vert ^{p-2}, x\in \Omega, t>0, \\ u(x, t)=0, \ x\in \partial\Omega, t>0\\ u\left( x, 0\right) =u_{0}\left( x\right), u_{t}\left( x, 0\right) =u_{1}\left( x\right), \ x\in \Omega.% \end{array}% \right. \label{system8} \end{equation}
(5)
They proved a blow up result if \(p>m\) and established the global existence.
In this article, we investigate Problem (1), in which all the damping mechanisms are considered at the same time; these assumptions make our problem different from those studied in the literature, especially concerning the exponential growth of solutions. We will prove that if the initial energy \(E(0)\) of our solution is negative (this means that our initial data are large enough), then our local solution is unbounded and
\begin{equation} \Vert u\Vert_{p}^{p}\rightarrow\infty, \end{equation}
(6)
as \(t\) tends to \(\infty\), using ideas from [10, 11, 12, 13].
Our aim in the present work is to extend the existing exponential growth results to strong damping for a viscoelastic problem with distributed delay under the following assumptions:
  • (A1) \(\varpi:\mathbb{R}_{+}\rightarrow \mathbb{R}_{+}\) is a decreasing function such that
    \begin{equation} \varpi(t)\geq0\hspace{0.3cm}, \ 1-\int_{0}^{\infty }\varpi\left( q\right) dq=l>0. \label{A1} \end{equation}
    (7)
  • (A2) There exists a constant \(\xi>0\) such that
    \begin{equation} \varpi^{\prime }\left( t\right) \leq -\xi \varpi\left( t\right) \hspace{0.3cm}, \ t\geq 0. \label{A2} \end{equation}
    (8)
  • (A3) \(\mu _{2}:[\tau_{1}, \tau_{2}]\rightarrow\mathbb{R}\) is a bounded function such that
    \begin{equation} \Big(\frac{2\delta-1}{2}\Big)\int_{\tau_{1}}^{\tau_{2}}\vert\mu _{2}(q)\vert dq\leq\mu _{1}\hspace{0.3cm}, \ \delta>\frac{1}{2}.\label{A3} \end{equation}
    (9)

2. Main results

First, as in [14], we introduce the new variable \begin{equation*} y(x, \rho, q, t)=u_{t}(x, t-q\rho), \end{equation*} then we obtain
\begin{equation} \left\{ \begin{array}{l} qy_{t}(x, \rho, q, t)+y_{\rho}(x, \rho, q, t)=0\\ \\ y(x, 0, q, t)=u_{t}(x, t). \end{array}% \right. \label{e1.1} \end{equation}
(10)
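For the reader's convenience, the following short computation (our addition) verifies that the new variable indeed satisfies the transport equation in (10): by the chain rule,
\begin{equation*} y_{t}(x, \rho, q, t)=u_{tt}(x, t-q\rho), \qquad y_{\rho}(x, \rho, q, t)=-q\,u_{tt}(x, t-q\rho), \end{equation*}
so that \(q y_{t}(x, \rho, q, t)+y_{\rho}(x, \rho, q, t)=0\), while \(y(x, 0, q, t)=u_{t}(x, t)\) by definition.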
Let us denote by
\begin{equation} (\varpi o u)=\int_{\Omega}\int_{0}^{t}\varpi(t-q)\vert u(t)-u(q)\vert^{2}dq\, dx. \label{e1.3} \end{equation}
(11)
Therefore, Problem (1) takes the form
\begin{equation} \left\{ \begin{array}{l} u_{tt}-\Delta u-\omega\Delta u_{t}+\displaystyle\int_{0}^{t}\varpi(t-q) \Delta u(q) dq +\mu _{1}u_{t} +\displaystyle\int_{\tau _{1}}^{\tau _{2}}\vert\mu_{2} (q)\vert y(x, 1, q, t) dq=b\vert u\vert ^{p-2}u, \ x\in \Omega, t>0, \\ qy_{t}(x, \rho, q, t)+y_{\rho}(x, \rho, q, t)=0, \end{array}% \right. \label{sys1.1} \end{equation}
(12)
with initial and boundary conditions
\begin{equation} \left\{ \begin{array}{l} u( x, t) =0, \ x\in \partial \Omega, \\ y( x, \rho, q, 0) =f_{0}\left( x, q\rho \right), \\ u\left( x, 0\right) =u_{0}\left( x\right), u_{t}\left( x, 0\right) =u_{1}\left( x\right), \end{array}% \right. \label{sys1.2} \end{equation}
(13)
where $$(x, \rho, q, t)\in \Omega\times(0, 1)\times (\tau_{1}, \tau_{2})\times(0, \infty).$$ We state without proof the local existence theorem, which can be established by combining arguments of [15].

Theorem 1. Assume (7), (8) and (9) hold. Let

\begin{equation} \begin{cases} 2< p< \dfrac{2n-2}{n-2}, &n\geq 3;\\ p\geq 2, & n=1,2. \end{cases}\label{a1.1} \end{equation}
(14)
Then for any initial data $$(u_{0},u_{1},f_{0})\in \mathcal{H},\quad \text{where } \mathcal{H}= H^{1}_{0}(\Omega)\times H^{1}_{0}(\Omega)\times L^{2}(\Omega\times(0, 1)\times(\tau_{1}, \tau_{2})),$$ with compact support, Problem (12)-(13) has a unique solution $$u\in C([0, T]; \mathcal{H}),$$ for some \(T>0\).

In the next theorem we give the global existence result. Its proof is based on the potential well depth method, in which the concept of the so-called stable set appears: if we restrict our initial data to the stable set, then the local solution obtained is global in time. One can make use of the arguments in [16].

Theorem 2. Suppose that (7), (8), (9) and (14) hold. If \(u_{0}\in W\), \(u_{1}\in H^{1}_{0}(\Omega)\) and

\begin{equation} \frac{bC_{*}^{p}}{l}\Big(\frac{2p}{(p-2)l}E(0)\Big)^{\frac{p-2}{2}}< 1, \label{m1.2} \end{equation}
(15)
where \(C_{*}\) is the best Poincaré constant, then the local solution \(u(t, x)\) is global in time.

Lemma 1. Assume (7), (8), (9) and (14) hold, and let \(u(t)\) be a solution of (12). Then \(\mathcal{E}(t)\) is non-increasing; that is, the energy

\begin{align} \mathcal{E}(t)&=\frac{1}{2}\Vert u_{t}\Vert_{2}^{2}+\frac{1}{2}\Big(1-\int_{0}^{t}\varpi(q)dq\Big)\Vert \nabla u\Vert_{2}^{2}+\frac{1}{2}(\varpi o\nabla u)+\frac{1}{2}\int_{\Omega}\int_{0}^{1}\int_{\tau_{1}}^{\tau_{2}}q\vert \mu_{2}(q)\vert y^{2}(x, \rho, q, t)dqd\rho dx-\frac{b}{p}\Vert u\Vert_{p}^{p}.\notag\\&\label{sys2.1} \end{align}
(16)
satisfies
\begin{equation} \mathcal{E}'(t)\leq -c_{1}\Big(\Vert u_{t}\Vert_{2}^{2}+\int_{\Omega}\int_{\tau_{1}}^{\tau_{2}}\vert \mu_{2}(q)\vert y^{2}(x, 1, q, t)dqdx\Big).\label{sys2.2} \end{equation}
(17)

Proof. By multiplying the Equation (12)\(_{1}\) by \(u_{t}\) and integrating over \(\Omega\), we get

\begin{eqnarray} &&\frac{d}{dt}\Big\{\frac{1}{2}\Vert u_{t}\Vert_{2}^{2}+\frac{1}{2}\Big(1-\int_{0}^{t}\varpi(q)dq\Big)\Vert \nabla u\Vert_{2}^{2}+\frac{1}{2}(\varpi o\nabla u)-\frac{b}{p}\Vert u\Vert_{p}^{p}\Big\}\notag\\ &&=-\mu_{1}\Vert u_{t}\Vert_{2}^{2} -\int_{\Omega}u_{t}\int_{\tau_{1}}^{\tau_{2}}\vert \mu_{2}(q)\vert y(x, 1, q, t)dqdx+\frac{1}{2}(\varpi'o\nabla u)-\frac{1}{2}\varpi(t)\Vert \nabla u\Vert_{2}^{2}-\omega\Vert \nabla u_{t}\Vert_{2}^{2},\label{sys2.3} \end{eqnarray}
(18)
and, we have
\begin{eqnarray} &&\frac{d}{dt}\frac{1}{2}\int_{\Omega}\int_{0}^{1}\int_{\tau_{1}}^{\tau_{2}}q\vert \mu_{2}(q)\vert y^{2}(x, \rho, q, t)dqd\rho dx=-\frac{1}{2}\int_{\Omega}\int_{0}^{1}\int_{\tau_{1}}^{\tau_{2}}2\vert \mu_{2}(q)\vert yy_{\rho}dqd\rho dx\notag\\ &&=\frac{1}{2}\int_{\Omega}\int_{\tau_{1}}^{\tau_{2}}\vert \mu_{2}(q)\vert y^{2}(x, 0, q, t)dqdx-\frac{1}{2}\int_{\Omega}\int_{\tau_{1}}^{\tau_{2}}\vert \mu_{2}(q)\vert y^{2}(x, 1, q, t)dq dx\notag\\ & &=\frac{1}{2}\Big(\int_{\tau_{1}}^{\tau_{2}}\vert \mu_{2}(q)dq\Big)\Vert u_{t}\Vert_{2}^{2}-\frac{1}{2}\int_{\Omega}\int_{\tau_{1}}^{\tau_{2}}\vert \mu_{2}(q)\vert y^{2}(x, 1, q, t)dq dx.\label{sys2.5} \end{eqnarray}
(19)
Then, we get
\begin{eqnarray} &&\frac{d}{dt}\mathcal{E}(t)=-\mu_{1}\Vert u_{t}\Vert_{2}^{2}-\int_{\Omega}\int_{\tau_{1}}^{\tau_{2}}\vert \mu_{2}(q)\vert u_{t}y(x, 1, q, t)dqdx+\frac{1}{2}(\varpi'o\nabla u)\notag\\ &&-\frac{1}{2}\varpi(t)\Vert \nabla u\Vert_{2}^{2}-\omega\Vert \nabla u_{t}\Vert_{2}^{2} +\frac{1}{2}\Big(\int_{\tau_{1}}^{\tau_{2}}\vert \mu_{2}(q)dq\Big)\Vert u_{t}\Vert_{2}^{2}-\frac{1}{2}\int_{\Omega}\int_{\tau_{1}}^{\tau_{2}}\vert \mu_{2}(q)\vert y^{2}(x, 1, q, t)dq dx.\label{sys2.6} \end{eqnarray}
(20)
Combining (18) and (19) yields (20). Further, using Young's inequality together with (7), (8) and (9) in (20), we obtain (17).

Now we are ready to state and prove our main result. For this purpose, we define
\begin{eqnarray}\label{sys2.12} H(t)&=&-\mathcal{E}(t)\\&=&\frac{b}{p}\Vert u\Vert_{p}^{p}-\frac{1}{2}\Vert u_{t}\Vert_{2}^{2}-\frac{1}{2}(1-\int_{0}^{t}\varpi(q)dq)\Vert \nabla u\Vert_{2}^{2}-\frac{1}{2}(\varpi o\nabla u)-\frac{1}{2}\int_{\Omega}\int_{0}^{1}\int_{\tau_{1}}^{\tau_{2}}q\vert \mu_{2}(q)\vert y^{2}(x, \rho, q, t)dqd\rho dx.\notag \end{eqnarray}
(21)

Theorem 3. Suppose that (7)-(9) and (14) hold, and assume further that \(\mathcal{E}(0)< 0\). Then the unique local solution of Problem (12) grows exponentially.

Proof. From (17) and the assumption \(\mathcal{E}(0)< 0\), we have

\begin{equation} \mathcal{E}(t)\leq \mathcal{E}(0)\leq 0. \label{sys2.13} \end{equation}
(22)
Hence
\begin{eqnarray} H'(t)=-\mathcal{E}'(t)&\geq & c_{1}\Big(\Vert u_{t}\Vert_{2}^{2}+\int_{\Omega}\int_{\tau_{1}}^{\tau_{2}}\vert \mu_{2}(q)\vert y^{2}(x, 1, q, t)dqdx\Big)\notag\\ &\geq&c_{1}\int_{\Omega}\int_{\tau_{1}}^{\tau_{2}}\vert \mu_{2}(q)\vert y^{2}(x, 1, q, t)dqdx\geq0, \label{sys2.14} \end{eqnarray}
(23)
and
\begin{equation} 0\leq H(0)\leq H(t)\leq \frac{b}{p}\Vert u\Vert^{p}_{p}. \label{sys2.15} \end{equation}
(24)
We set
\begin{equation} \mathcal{K}(t)=H(t)+\varepsilon \int_{\Omega}uu_{t}dx+\frac{\varepsilon\mu_{1}}{2}\int_{\Omega}u^{2}dx+\frac{\varepsilon\omega}{2}\int_{\Omega}(\nabla u)^{2}dx, \label{sys2.16} \end{equation}
(25)
where \(\varepsilon>0\) is a constant to be specified later.
Multiplying \((12)_{1}\) by \(u\) and taking derivative of (25), we obtain
\begin{eqnarray} \mathcal{K}'(t)&= &H'(t)+\varepsilon\Vert u_{t}\Vert_{2}^{2}+\varepsilon\int_{\Omega}\nabla u\int^{t}_{0}\varpi(t-q) \nabla u(q)dqdx \notag\\ &&-\varepsilon\Vert \nabla u\Vert_{2}^{2}+\varepsilon b\int_{\Omega}\vert u\vert^{p}dx -\varepsilon\int_{\Omega}\int_{\tau_{1}}^{\tau_{2}}\vert \mu_{2}(q)\vert uy(x, 1, q, t)dqdx. \label{sys2.18} \end{eqnarray}
(26)
Using
\begin{eqnarray}\label{sys2.19} \varepsilon\int_{\Omega}\int_{\tau_{1}}^{\tau_{2}}\vert \mu_{2}(q)\vert uy(x, 1, q, t)dqdx&\leq&\varepsilon\Big\{\delta_{1}\Big(\int_{\tau_{1}}^{\tau_{2}}\vert \mu_{2}(q)\vert dq\Big)\Vert u\Vert_{2}^{2}+\frac{1}{4\delta_{1}}\int_{\Omega}\int_{\tau_{1}}^{\tau_{2}}\vert \mu_{2}(q)\vert y^{2}(x, 1, q, t)dqdx\Big \},\notag\\ \end{eqnarray}
(27)
and
\begin{eqnarray} \varepsilon\int_{0}^{t}\varpi(t-q)dq\int_{\Omega}\nabla u \nabla u(q)dxdq&=&\varepsilon\int_{0}^{t}\varpi(t-q)dq\int_{\Omega}\nabla u (\nabla u(q)-\nabla u(t))dxdq+\varepsilon\int_{0}^{t}\varpi(q)dq\Vert \nabla u\Vert_{2}^{2}\notag\\ &\geq & \frac{\varepsilon}{2}\int_{0}^{t}\varpi(q)dq\Vert \nabla u\Vert_{2}^{2}-\frac{\varepsilon}{2}(\varpi o\nabla u). \label{sys2.20} \end{eqnarray}
(28)
We obtain, from (26),
\begin{eqnarray} \mathcal{K}'(t)&\geq &H'(t)+\varepsilon\Vert u_{t}\Vert_{2}^{2}-\varepsilon\Big(1-\frac{1}{2}\int_{0}^{t}\varpi(q)dq\Big)\Vert \nabla u\Vert_{2}^{2} +\varepsilon b\Vert u\Vert^{p}_{p} \notag\\ &&-\varepsilon\delta_{1}\Big(\int_{\tau_{1}}^{\tau_{2}}\vert \mu_{2}(q)\vert dq\Big)\Vert u\Vert_{2}^{2}-\frac{\varepsilon}{4\delta_{1}}\int_{\Omega}\int_{\tau_{1}}^{\tau_{2}}\vert \mu_{2}(q)\vert y^{2}(x, 1, q, t)dqdx+\frac{\varepsilon}{2}(\varpi o\nabla u). \label{sys2.21} \end{eqnarray}
(29)
Therefore, using (23) and by setting \(\delta_{1}\) so that, \(\dfrac{1}{4\delta_{1}c_{1}}=\kappa \), substituting in (29), we get
\begin{eqnarray} \mathcal{K}'(t)&\geq &[1-\varepsilon\kappa]H'(t)+\varepsilon\Vert u_{t}\Vert_{2}^{2}-\varepsilon\Big[\Big(1-\frac{1}{2}\int_{0}^{t}\varpi(q)dq\Big)\Big]\Vert \nabla u\Vert_{2}^{2}+\varepsilon b\Vert u\Vert^{p}_{p} \notag\\ && -\frac{\varepsilon}{4c_{1}\kappa}\Big(\int_{\tau_{1}}^{\tau_{2}}\vert \mu_{2}(q)\vert dq\Big)\Vert u\Vert_{2}^{2}+\frac{\varepsilon}{2}(\varpi o\nabla u). \label{sys2.22} \end{eqnarray}
(30)
For \(0< a< 1\), from (21)
\begin{eqnarray} \varepsilon b\Vert u\Vert^{p}_{p} &=&\varepsilon p(1-a)H(t)+\frac{\varepsilon p(1-a)}{2}\Vert u_{t}\Vert_{2}^{2}+\varepsilon ba\Vert u\Vert_{p}^{p}\notag\\ &&+\frac{\varepsilon p(1-a)}{2}\Big(1-\int_{0}^{t}\varpi(q)dq\Big)\Vert \nabla u\Vert_{2}^{2}+\frac{\varepsilon}{2}p(1-a)(\varpi o\nabla u)\notag\\ &&+\frac{\varepsilon p(1-a)}{2}\int_{\Omega}\int_{0}^{1}\int_{\tau_{1}}^{\tau_{2}}q\vert \mu_{2}(q)\vert y^{2}(x, \rho, q, t)dqd\rho dx.\label{sys2.23} \end{eqnarray}
(31)
Substituting in (30), we get
\begin{eqnarray} \mathcal{K}'(t)&\geq &[1-\varepsilon\kappa]H'(t)+\varepsilon\Big[\frac{p(1-a)}{2}+1\Big]\Vert u_{t}\Vert_{2}^{2}\notag\\ &&+\varepsilon\Big[\Big(\frac{p(1-a)}{2}\Big)\Big(1-\int_{0}^{t}\varpi(q)dq\Big)-\Big(1-\frac{1}{2}\int_{0}^{t}\varpi(q)dq \Big)\Big]\Vert \nabla u\Vert_{2}^{2} \notag\\ &&-\frac{\varepsilon}{4c_{1}\kappa}\Big(\int_{\tau_{1}}^{\tau_{2}}\vert \mu_{2}(q)\vert dq\Big)\Vert u\Vert_{2}^{2}+\varepsilon p(1-a)H(t)+\varepsilon ba\Vert u\Vert_{p}^{p}\notag\\ &&+\frac{\varepsilon p(1-a)}{2}\int_{\Omega}\int_{0}^{1}\int_{\tau_{1}}^{\tau_{2}}q\vert \mu_{2}(q)\vert y^{2}(x, \rho, q, t)dqd\rho dx+\frac{\varepsilon}{2}(p(1-a)+1)(\varpi o\nabla u). \label{sys2.24} \end{eqnarray}
(32)
Using Poincare's inequality, we obtain
\begin{eqnarray} \mathcal{K}'(t)&\geq &[1-\varepsilon\kappa]H'(t)+\varepsilon\Big[\frac{p(1-a)}{2}+1\Big]\Vert u_{t}\Vert_{2}^{2}+\frac{\varepsilon}{2}(p(1-a)+1)(\varpi o\nabla u)\notag\\ &&+\varepsilon\Big\{\Big(\frac{p(1-a)}{2}-1\Big)-\int_{0}^{t}\varpi(q)dq(\frac{p(1-a)-1}{2})\notag\\ &&- \frac{c}{4c_{1}\kappa}\Big(\int_{\tau_{1}}^{\tau_{2}}\vert \mu_{2}(q)\vert dq\Big)\Big\}\Vert \nabla u\Vert_{2}^{2} +\varepsilon ab\Vert u\Vert^{p}_{p}+\varepsilon p(1-a)H(t)\notag\\ &&+\frac{\varepsilon p(1-a)}{2}\int_{\Omega}\int_{0}^{1}\int_{\tau_{1}}^{\tau_{2}}q\vert \mu_{2}(q)\vert y^{2}(x, \rho, q, t)dqd\rho dx. \label{sys2.27} \end{eqnarray}
(33)
At this point, we choose \(a>0\) so small that \begin{equation*} \alpha_{1}=\frac{p(1-a)}{2}-1>0, \end{equation*} and assume
\begin{equation} \int_{0}^{\infty}\varpi(q)dq< \dfrac{\dfrac{p(1-a)}{2}-1}{\Big(\dfrac{p(1-a)}{2}-\dfrac{1}{2}\Big)}=\dfrac{2\alpha_{1}}{2\alpha_{1}+1},\label{c1.1} \end{equation}
(34)
then we choose \(\kappa\) so large that \begin{equation*} \alpha_{2}=\Big(\frac{p(1-a)}{2}-1\Big)-\int_{0}^{t}\varpi(q)dq\Big(\frac{p(1-a)-1}{2}\Big)- \frac{c}{4c_{1}\kappa}\Big(\int_{\tau_{1}}^{\tau_{2}}\vert \mu_{2}(q)\vert dq\Big)>0. \end{equation*} Once \(\kappa\) and \(a\) are fixed, we pick \(\varepsilon\) small enough so that \begin{equation*} \alpha_{4}=1-\varepsilon\kappa>0, \end{equation*} and
\begin{equation} \mathcal{K}(t)\leq \frac{b}{p}\Vert u \Vert^p_p. \label{m1.22} \end{equation}
(35)
Thus, for some \(\beta>0\), estimate (33) becomes
\begin{eqnarray} \mathcal{K}'(t)&\geq &\beta\Big\{H(t)+\Vert u_{t}\Vert_{2}^{2} +\Vert \nabla u\Vert_{2}^{2} +(\varpi o\nabla u)+\Vert u\Vert_{p}^{p}+\int_{\Omega}\int_{0}^{1}\int_{\tau_{1}}^{\tau_{2}}q\vert \mu_{2}(q)\vert y^{2}(x, \rho, q, t)dqd\rho dx\Big\}, \label{sys2.28} \end{eqnarray}
(36)
and
\begin{equation} \mathcal{K}(t)\geq\mathcal{K}(0)>0, \ t>0.\label{sys2.29} \end{equation}
(37)
Next, using Young's and Poincare's inequalities, from (25) we have
\begin{eqnarray} \mathcal{K}(t)&=&\Big(H+\varepsilon \int_{\Omega}uu_{t}dx+\frac{\varepsilon\mu_{1}}{2}\int_{\Omega}u^{2}dx+\frac{\varepsilon\omega}{2}\int_{\Omega}\nabla u^{2}dx\Big)\notag\\ &\leq&c[H(t)+\vert\int_{\Omega}uu_{t}dx\vert+\Vert u\Vert_{2}^{2}+\Vert \nabla u\Vert_{2}^{2}]\leq c[H(t)+\Vert \nabla u\Vert_{2}^{2}+\Vert u_{t}\Vert_{2}^{2}]. \label{sys2.33} \end{eqnarray}
(38)
for some \(c>0\). Since \(H(t) > 0\), we have from (21)
\begin{eqnarray} &&-\frac{1}{2}\Vert u_{t}\Vert_{2}^{2}-\frac{1}{2}\Big(1-\int_{0}^{t}\varpi(q)dq\Big)\Vert \nabla u\Vert_{2}^{2}-\frac{1}{2}(\varpi o\nabla u)\label{sys2.1.0}\\ &&-\frac{1}{2}\int_{\Omega}\int_{0}^{1}\int_{\tau_{1}}^{\tau_{2}}q\vert \mu_{2}(q)\vert y^{2}(x, \rho, q, t)dqd\rho dx+\frac{b}{p}\Vert u\Vert_{p}^{p}>0,\notag \end{eqnarray}
(39)
then \begin{eqnarray} \frac{1}{2}\Big(1-\int_{0}^{t}\varpi(q)dq\Big)\Vert \nabla u\Vert_{2}^{2}&<&\frac{b}{p}\Vert u\Vert_{p}^{p}< \frac{b}{p}\Vert u\Vert_{p}^{p}+(\varpi o\nabla u)\label{sys2.1.1}+\int_{\Omega}\int_{0}^{1}\int_{\tau_{1}}^{\tau_{2}}q\vert \mu_{2}(q)\vert y^{2}(x, \rho, q, t)dqd\rho dx.\notag \end{eqnarray} On the other hand, using (7), we get
\begin{eqnarray} \frac{1}{2}(1-l)\Vert \nabla u\Vert_{2}^{2}&<&\frac{b}{p}\Vert u\Vert_{p}^{p}< \frac{b}{p}\Vert u\Vert_{p}^{p}+(\varpi o\nabla u)+\int_{\Omega}\int_{0}^{1}\int_{\tau_{1}}^{\tau_{2}}q\vert \mu_{2}(q)\vert y^{2}(x, \rho, q, t)dqd\rho dx.\label{sys2.1.2} \end{eqnarray}
(40)
Consequently,
\begin{eqnarray} \Vert \nabla u\Vert_{2}^{2}&<&\frac{2b}{p}\Vert u\Vert_{p}^{p}+2(\varpi o\nabla u)+l\Vert \nabla u\Vert_{2}^{2}+2\int_{\Omega}\int_{0}^{1}\int_{\tau_{1}}^{\tau_{2}}q\vert \mu_{2}(q)\vert y^{2}(x, \rho, q, t)dqd\rho dx.\label{sys2.1.3} \end{eqnarray}
(41)
Inserting (41) into (38), we see that there exists a positive constant \(k_{1}\) such that
\begin{eqnarray} \mathcal{K}(t)&\leq & k_{1}[H(t)+\Vert \nabla u\Vert_{2}^{2}+\Vert u_{t}\Vert_{2}^{2}+\frac{b}{p}\Vert u\Vert_{p}^{p}+(\varpi o\nabla u)(t)\notag\\ &&+\int_{\Omega}\int_{0}^{1}\int_{\tau_{1}}^{\tau_{2}}q\vert \mu_{2}(q)\vert y^{2}(x, \rho, q, t)dqd\rho dx], \forall t>0. \label{sys2.1.4} \end{eqnarray}
(42)
From inequalities (36) and (42) we obtain the differential inequality
\begin{equation} \mathcal{K}'(t)\geq \lambda \mathcal{K}(t), \label{sys2.34} \end{equation}
(43)
where \(\lambda>0\) depends only on \(\beta\) and \(k_{1}\). A simple integration of (43), noting that \(\frac{d}{dt}\big(e^{-\lambda t}\mathcal{K}(t)\big)=e^{-\lambda t}\big(\mathcal{K}'(t)-\lambda\mathcal{K}(t)\big)\geq 0\), gives
\begin{equation} \mathcal{K}(t)\geq \mathcal{K}(0)e^{(\lambda t)}, \forall t>0.\label{m2.0} \end{equation}
(44)
From (24) and (35), we have
\begin{equation} \mathcal{K}(t)\leq H(t)\leq\frac{b}{p}\Vert u\Vert_{p}^{p}.\label{m2.1} \end{equation}
(45)
By (44) and (45), we have \begin{equation*} \Vert u\Vert_{p}^{p}\geq C e^{(\lambda t)}, \forall t>0. \end{equation*} Therefore, we conclude that the solution grows exponentially in the \(L_{p}\)-norm. This completes the proof.

Author Contributions

All authors contributed equally to the writing of this paper. All authors read and approved the final manuscript.

Competing Interests

The author(s) do not have any competing interests in the manuscript.

References

  1. Bouhali, K., & Ellaggoune, F. (2018). Existence and decay of solution to coupled system of viscoelastic wave equations with strong damping in \(\mathbb{R}^n\). Boletim da Sociedade Paranaense de Matematica, doi:10.5269/bspm.41175.[Google Scholor]
  2. Cavalcanti, M. M., Domingos Cavalcanti, V. N., & Ferreira, J. (2001). Existence and uniform decay for a non-linear viscoelastic equation with strong damping. Mathematical Methods in Applied Sciences, 24(14), 1043-1053.[Google Scholor]
  3. Piskin, E. (2015). Growth of solutions with positive initial energy to systems of nonlinear wave equations with damping and source terms. Advances in Mathematical Physics, 2015.[Google Scholor]
  4. Piskin, E., & Ekinci, F. (2019). Blow up, exponential growth of solution for a reaction-diffusion equation with multiple nonlinearities. Tbilisi Mathematical Journal, 12(4), 61-70.[Google Scholor]
  5. Cavalcanti, M. M., Cavalcanti, V. D., Prates Filho, J. S., & Soriano, J. A. (2001). Existence and uniform decay rates for viscoelastic problems with nonlinear boundary damping. Differential and Integral Equations, 14(1), 85-116.[Google Scholor]
  6. Song, H., & Xue, D. (2014). Blow up in a nonlinear viscoelastic wave equation with strong damping. Nonlinear Analysis: Theory, Methods & Applications, 109, 245-251.[Google Scholor]
  7. Song, H., & Zhong, C. (2010). Blow-up of solutions of a nonlinear viscoelastic wave equation. Nonlinear Analysis: Real World Applications, 11(5), 3877-3883.[Google Scholor]
  8. Zennir, K. (2013). Exponential growth of solutions with \(L_{p}\)- norm of a nonlinear viscoelastic hyperbolic equation. Journal of Nonlinear Sciences & Applications (JNSA), 6(4), 252-262.[Google Scholor]
  9. Guo, L., Yuan, Z., & Lin, G. (2015). Blow up and global existence for a nonlinear viscoelastic wave equation with strong damping and nonlinear damping and source terms. Applied Mathematics, 6(5), 806-816.[Google Scholor]
  10. Braik, A., Miloudi, Y., & Zennir, K. (2018). A finite-time blow-up result for a class of solutions with positive initial energy for coupled system of heat equations with memories. Mathematical Methods in the Applied Sciences, 41(4), 1674-1682.[Google Scholor]
  11. Benaissa, A., Ouchenane, D., & Zennir, K. (2012). Blow up of positive initial-energy solutions to systems of nonlinear wave equations with degenerate damping and source terms. Nonlinear Studies, 19(4), 523-535.[Google Scholor]
  12. Ouchenane, D., Zennir, K., & Bayoud, M. (2013). Global nonexistence of solutions for a system of nonlinear viscoelastic wave equations with degenerate damping and source terms. Ukrainian Mathematical Journal, 65(5), 723-739.[Google Scholor]
  13. Zennir, K. (2014). Growth of solutions with positive initial energy to system of degeneratly damped wave equations with memory. Lobachevskii Journal of Mathematics, 35(2), 147-156.[Google Scholor]
  14. Nicaise, S., & Pignotti, C. (2008). Stabilization of the wave equation with boundary or internal distributed delay. Differential and Integral Equations, 21(9-10), 935-958.[Google Scholor]
  15. Georgiev, V., & Todorova, G. (1994). Existence of a solution of the wave equation with nonlinear damping and source terms. Journal of Differential Equations, 109(2), 295-308.[Google Scholor]
  16. Wu, S. T., & Tsai, L. Y. (2006). On global existence and blow-up of solutions for an integro-differential equation with strong damping. Taiwanese Journal of Mathematics, 10(4), 979-1014.[Google Scholor]
]]>
Mathematical model for measles disease with control on the susceptible and exposed compartments https://old.pisrt.org/psr-press/journals/oma-vol-4-issue-1-2020/mathematical-model-for-measles-disease-with-control-on-the-susceptible-and-exposed-compartments/ Tue, 28 Apr 2020 14:53:25 +0000 https://old.pisrt.org/?p=4092
OMA-Vol. 4 (2020), Issue 1, pp. 60 - 75 Open Access Full-Text PDF
Samuel O. Sowole, Abdullahi Ibrahim, Daouda Sangare, Ahmed O. Lukman
Abstract: In this paper, we develop a mathematical deterministic modeling approach to model measles disease by using the data pertinent to Nigeria. Control measure was introduced into the susceptible and exposed classes to study the prevalence and control of the measles disease. We established the existence and uniqueness of the solution to the model. From the simulation results, it was realized that the control introduced on the susceptible class; and exposed individuals at latent period play a significant role in controlling the disease. Furthermore, it is recognized that if more people in the susceptible class get immunization and the exposed people at latent period goes for treatment and therapy during this state before they become infective, the disease will be eradicated more quickly with time.
]]>

Open Journal of Mathematical Analysis

Mathematical model for measles disease with control on the susceptible and exposed compartments

Samuel O. Sowole\(^1\), Abdullahi Ibrahim, Daouda Sangare, Ahmed O. Lukman
Department of Mathematical Sciences, African Institute for Mathematical Sciences, Senegal.; (S.O.S)
Department of Mathematical Sciences, Baze University, Nigeria.; (A.I & A.O.L)
Department of Mathematical Sciences, Universite Gaston Berger, Senegal.; (D.S)
\(^1\)Corresponding Author: oladimeji.s.sowole@aims-senegal.org

Abstract

In this paper, we develop a mathematical deterministic modeling approach to model measles disease by using the data pertinent to Nigeria. Control measure was introduced into the susceptible and exposed classes to study the prevalence and control of the measles disease. We established the existence and uniqueness of the solution to the model. From the simulation results, it was realized that the control introduced on the susceptible class; and exposed individuals at latent period play a significant role in controlling the disease. Furthermore, it is recognized that if more people in the susceptible class get immunization and the exposed people at latent period goes for treatment and therapy during this state before they become infective, the disease will be eradicated more quickly with time.

Keywords:

Measles disease, mathematical model, SEIR model, control, existence and uniqueness of solution, stability analysis, basic reproduction number, Runge-Kutta, numerical simulations.

1. Introduction

Measles has been recognized as one of the world's most contagious diseases, with the potential to be extremely severe. Measles is transmitted from person to person through the air by infectious droplets and has attack rates of over \(90\%\) among susceptible persons [1]. The disease is caused by the measles virus, an infectious agent belonging to the virus family Paramyxoviridae, which causes an infection of the respiratory system with a common symptom of red rash on the infected person's skin [2]. Infectious diseases have been a serious concern for both human and animal populations globally. Control and prevention measures are therefore important tasks from both the human survival and the economic points of view, because public health is a major concern for the world. For effective intervention measures, a complete understanding of the transmission of a disease like measles, and of the burden it can place on humans, is therefore necessary. Even though the disease is entirely preventable by taking two doses of a safe and effective vaccine, worldwide vaccination coverage with the first dose of measles vaccine currently stands at \(85\%\). This is still not close to the \(95\%\) proposed by the World Health Organization (WHO) that would be needed to prevent outbreaks. Consequently, many people in some communities are still at risk of contracting the disease. According to WHO, coverage with the second dose of the vaccine, though increasing, currently stands at \(67\%\). However, to reach a higher level of protection against the disease, both the first and second doses of the vaccine must be taken.

Several factors are responsible for people not being vaccinated against the disease. Wrong perceptions about immunization have caused some not to go for vaccination or to release their wards for immunization [3]. Lack of access to health care facilities is another major factor, particularly among rural people, and can cause them to miss out on vaccination programmes being organized by their governments.

Researchers have attributed the main causes of transmission to inappropriate education about the measles disease and to a low detection rate at early stages. Financial constraints are also a major factor, as some individuals will opt for local or traditional treatment of the disease rather than going to hospitals for professional treatment. Although there are campaigns coupled with regular and targeted measles vaccination coverage across the 36 states and the Federal Capital Territory (FCT) in Nigeria, measles disease is still very prevalent, with evidence of sickness and death from measles outbreaks. There was a measles outbreak in Nigeria during the first quarter of 2019, with almost six thousand cases of measles, nearly double the cases reported in 2018, and 15 deaths recorded as at March 2019 [4]. Evidently, measles is one of the infectious diseases invading Nigeria, as well as some other developing countries in Africa.

Mathematical modelling of infectious diseases has proven to be a powerful and useful tool. Models are good for proposing and testing theories, and for comparing, planning, implementing and evaluating various detection, prevention, therapy and control intervention programs for infectious diseases. Since the beginning of the \(20^{th}\) century, researchers have been using epidemiological models to model infectious diseases [5]. Examples of such models are seen in [6, 7, 8]. Only a few of the models described by [5] focused on modelling childhood epidemics such as measles. Roberts [9] carried out a study on predicting and preventing measles epidemics in New Zealand; they used a compartmental SIR model to model the dynamics of measles disease under varied immunization strategies in a population, taking into consideration size and age structure. Momoh [10] developed a mathematical epidemiological model for the control of measles disease, using a compartmental SEIR model with varying population size, which best describes the population dynamics of developing countries in Africa. From previous literature, it has been ascertained that vaccination protects susceptible individuals against infectious diseases such as measles by producing herd (crowd) immunity; examples of such works can be found in [8, 11, 12, 13, 14], for instance. Sowole et al. [15] modelled measles disease using an SEIR model; taking Senegal as a case study, they looked at the effect a control measure on the exposed class would have on the entire population dynamics. Their model showed that if more people at the latent period go for treatment and therapy before they are able to transmit the disease, the disease will be eradicated within a short period of time.

Our goal is to model measles disease transmission in Nigeria and to come up with control measures for the reduction (and, by extension, the elimination) of the transmission of the disease in the country, with control on the susceptible and exposed compartments. Finding a threshold condition that determines whether an infectious disease will continue to spread or will die out with time in a given population is one of the fundamental questions of epidemiological modelling. We therefore derive a fundamental epidemiological quantity, \(R_0\), called the basic reproduction number, which is the threshold parameter. This work uses a compartmental Susceptible-Exposed-Infective-Recovered (SEIR) epidemic model to formulate the mathematical measles disease model, and the corresponding mathematical analysis and numerical simulations are presented.

The rest of the paper is structured as follows: the model is described and formulated in Section 2, the model simulation is given in Section 3, and the conclusions are given in Section 4.

2. Model description and formulation

The model divides the total human population at any given time into four sub-populations, in order to explain the transmission dynamics of the measles disease in a given human population.

The total human population \(N(t)\) is divided into the sub-populations of the Susceptible class \(S(t)\), the Exposed compartment \(E(t)\), the Infective compartment \(I(t)\) and the Recovered class \(R(t)\). So we have:

\begin{eqnarray}\label{eq1} N(t) = S(t) + E(t) + I(t) + R(t). \end{eqnarray}
(1)
The population under consideration is homogeneous and interacting, which reflects the demography of a typical developing country [16]. This model fits well, as it captures the exponentially increasing dynamics of the measles disease.

Figure 1.  Flow Diagram of Measles Disease in a Deterministic Population.

Figure 1 shows the flow diagram of measles disease in a deterministic population. The model variables and their meanings are presented in Table 1.
Table 1. State variables used and their meanings.
Variable Meaning
\(S(t)\) The number of susceptible individuals at  a given  time, t
\(E(t)\) The number of exposed individuals at a given time, t
\(I(t)\) The number of infective individuals at a given time, t
\(R(t)\) The number of recovered individuals at a given time, t

The Susceptible compartment includes individuals who are at risk of developing measles infection if they have contact with infected individuals. The Exposed class consists of individuals who have the infection but are not showing the symptoms of the measles disease and cannot yet transmit the disease to others. The Infective compartment consists of individuals who are showing the symptoms of the disease and can infect others.

The Recovered compartment comprises individuals who have recovered from the disease and have permanent immunity.

The susceptible class \(S\) increases through recruitment (birth and/or immigration), at a rate denoted by \(b\). It decreases through immunization of susceptible individuals at the rate \(v\), through infection upon contact with infected individuals at the rate \(\beta\), and through the leaving (natural death and emigration) rate, which we denote by \(\mu\), so that:

\begin{eqnarray}\label{succeptible_class} \frac{dS}{dt} = b - \beta SI - ( v + \mu) S \end{eqnarray}
(2)
The second class, the Exposed class \(E\), is formed by individuals who have had direct contact with infected individuals at the rate \(\beta\). We define the parameters and their meanings in Table 2 below.
Table 2. State parameters and their meaning.
Parameter Meaning
\(b\) Recruitment rate ( by birth and/or immigrants)
\(v\) vaccination rate for the susceptible class who later got vaccinated
\(\mu\) Leaving rate (by death  and/or emigrants)
\(\beta\) The contact rate
\(\gamma\) The rate at which an infective individuals recovered per unit time
\(\sigma\) The rate of exposed individuals who have undergone testing and therapy
\(\alpha\) The rate at which an exposed become infective

The class \(E\) is decreased by individuals who undergo testing and measles therapy at the rate \(\sigma\), by individuals who progress into the infected class at the rate \(\alpha\), and by the leaving rate \(\mu\), so that:

\begin{eqnarray}\label{exposed_class} \frac{dE}{dt} = \beta SI - (\mu + \alpha + \sigma)E \end{eqnarray}
(3)
The third class \(I\), of infective individuals, is formed by individuals who progress from the exposed class at the rate \(\alpha\). This class is decreased by individuals who recover from the infection at the rate \(\gamma\) and by the leaving rate \(\mu\). We then have:
\begin{eqnarray}\label{infected} \frac{dI}{dt} = \alpha E - (\mu + \gamma)I \end{eqnarray}
(4)
In the SEIR model it is assumed that susceptible-immunized individuals, exposed-recovered individuals and infected-recovered individuals become permanently immune to measles, i.e., one can only be infected once with the disease. We use this assumption to generate the fourth class \(R\) of individuals who have complete protection against the disease. This class of recovered individuals is decreased by the leaving rate \(\mu\), so that:
\begin{eqnarray}\label{recovered} \frac{dR}{dt} = v S + \gamma I + \sigma E - \mu R \end{eqnarray}
(5)
This SEIR model is now represented by the following system of first order differential equations, which describes the transitions between the four compartments of the model:
\begin{equation}\label{model1} \begin{cases} \frac{dS}{dt} = b - \beta SI - (v + \mu) S,\\ \frac{dE}{dt} = \beta SI - (\mu + \alpha + \sigma)E\\ \frac{dI}{dt} = \alpha E - (\mu + \gamma)I\\ \frac{dR}{dt} = v S + \gamma I + \sigma E - \mu R \end{cases} \end{equation}
(6)
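For readers who wish to experiment with system (6), a minimal Python encoding of its right-hand side might look as follows (a sketch; parameter names follow Table 2):

# Right-hand side of system (6); a direct transcription, parameter names as in Table 2.
def seir_rhs(S, E, I, R, b, beta, v, mu, alpha, sigma, gamma):
    dS = b - beta * S * I - (v + mu) * S
    dE = beta * S * I - (mu + alpha + sigma) * E
    dI = alpha * E - (mu + gamma) * I
    dR = v * S + gamma * I + sigma * E - mu * R
    return dS, dE, dI, dR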

2.1. Model assumptions

The following are the assumptions for the compartmental SEIR model for measles disease which we modeled in this work:
  • (i) Recruits entering the population (newborns and/or migrants) are assumed to be susceptible to the disease.
  • (ii) We assume that every person in the population under consideration is susceptible to the measles disease.
  • (iii) Every individual is equally likely to be infected by the infectious individual(s) in case of contact, except for those who are immune against the measles disease.
  • (iv) Infectious individuals are detected early and isolated for immediate treatment and education.
  • (v) The population is homogeneously mixed. By homogeneously mixed we mean a population that interacts among itself in a uniform manner.
  • (vi) The population is a varying population, where the recruitment rate and the leaving rate differ within the given time steps.
  • (vii) There is no treatment failure, a patient will either recover or die.
  • (viii) Recovered individuals are permanently immune against the disease.

2.2. Properties of the model

The basic properties of our model are the ``feasible solution'' property and the ``positivity of the solution'' property. The feasible solution region of the model equations is the region in which the solutions of the equations are biologically meaningful, and the positivity of the solutions establishes the non-negativity of the solutions of the model equations.

2.3. Feasible solution

Here, the deterministic SEIR model is used to model an infectious disease in a human population. It is reasonable to assume that all parameters and all class variables are non-negative for all \(t \geq 0\). The proof that all variables of the model remain non-negative for all given non-negative initial conditions is provided below. The feasible solution region, which is a positively invariant set of the model, is given by:
\begin{equation} \Omega = \bigg\{(S, E, I, R) \in \mathbb{R}_{+}^{4} \;\big|\; N(t) = S(t) + E(t) + I(t)+ R(t) \leq \frac{b}{\mu} \bigg\}. \end{equation}
(7)
The following lemma established this claim.

Lemma 1. The set \(\Omega\) is positively invariant and attracts all solutions in \(\mathbb{R}_{+}^{4}\).

Proof. Since \(N(t) = S(t) + E(t) + I(t) + R(t)\), adding Equations (2) to (5) gives the rate of change of the total population: \begin{align*} \frac{dN}{dt} &= \frac{dS}{dt} + \frac{dE}{dt} + \frac{dI}{dt} + \frac{dR}{dt} \\ &= b - \mu(S+E+I+R)\\ &= b - \mu N \hspace{1cm} (\text{since } N = S+E+I+R).\end{align*} This is a first order linear differential equation, which after re-arranging takes the form

\begin{equation}\label{2.7} \frac{dN}{dt}+ \mu N = b, \end{equation}
(8)
and can be solved with an integrating factor. Here, \(\mu\) and \(b\) are both constants, so the integrating factor is \(I.F= \exp \big(\int \mu \,dt \big).\) Multiplying both sides of Equation (8) by \(\exp(\int \mu \,dt)\) gives
\begin{equation}\label{2.77} \exp \bigg(\int \mu dt \bigg) \bigg( \frac{dN}{dt} + \mu N \bigg) = b \exp \bigg(\int \mu dt \bigg). \end{equation}
(9)
The left hand side of Equation (9) is \( \frac{d}{dt} \big[ N(t) \exp \big(\int\mu \,dt \big)\big]\), therefore \[ \frac{d}{dt} \bigg[ N(t) \exp \bigg (\int\mu \,dt \bigg) \bigg] = b \exp \bigg( \int \mu \,dt \bigg). \] Integrating both sides, we have \[ N(t) \exp (\mu t) = \frac{b}{\mu} \exp (\mu t) + K,\] where \(K\) is a constant of integration, so that \[N(t) = \frac{b}{\mu} + K\exp(-\mu t) . \] When \(t=0\), we have \[ N(0) = \frac{b}{\mu} + K, \] and therefore \[ K = N(0) - \frac{b}{\mu}\,. \] Substituting the value of \(K\), the solution of this linear differential equation becomes \[ N(t) = N(0)\exp(-\mu t)+ \frac{b}{\mu} \big(1- \exp(-\mu t)\big). \] In particular, if \(N(0) \leq \frac{b}{\mu}\) then \(N(t) \leq \frac{b}{\mu}\) for all \(t \geq 0\), and taking the limit as \(t \rightarrow \infty\) we obtain \( N(t) \rightarrow \frac{b}{\mu}\). Therefore, we have established that \(\Omega\) is positively invariant and attracts all solutions in \(\mathbb{R}_{+}^{4}\).
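As an illustration (not part of the proof), the closed-form expression for \(N(t)\) can be checked against a direct numerical integration of \(\frac{dN}{dt} = b - \mu N\). The rates \(b\) and \(\mu\) below are those later listed in Table 3; the initial value \(N(0)\) is an assumed, purely illustrative number:

import math

b, mu = 0.03691, 0.01241          # recruitment and leaving rates (Table 3)
N0 = 2.0                          # assumed initial value, here with N(0) < b/mu

def N_exact(t):
    # N(t) = N(0) e^{-mu t} + (b/mu)(1 - e^{-mu t})
    return N0 * math.exp(-mu * t) + (b / mu) * (1.0 - math.exp(-mu * t))

# crude Euler integration of dN/dt = b - mu*N as a cross-check
N, dt, T = N0, 0.01, 500.0
for _ in range(int(T / dt)):
    N += dt * (b - mu * N)

print(round(N, 4), round(N_exact(T), 4), round(b / mu, 4))   # both values approach b/mu ≈ 2.9742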

2.3.1 Positivity of solutions
In this section we prove that all variables in the SEIR model Equations (2) to (5) are non-negative.

Lemma 2. Let the initial data be \((S, E, I, R)(0) \geq 0 \in \Omega\). Then the solution set \((S(t), E(t), I(t), R(t))\) of Equations (2) to (5) is positive for all \(t > 0\).

Proof. From Equation (2) we have \[\frac{dS}{dt} = b - \beta SI - (v + \mu) S \geq -(\beta I + v + \mu )S.\] That is \[\frac{dS}{dt} \geq -(\beta I + v + \mu )S \hspace{0.5cm} \text{or} \hspace{0.5cm} \frac{dS}{S} \geq -(\beta I + v + \mu )dt \hspace{1cm} (\text{by separation of variables, since } S \ne 0) .\] Integrating both sides of the inequality from \(0\) to \(t\), we have \[\ln \big( S(t) \big) - \ln \big( S(0) \big) \geq -(\beta I + v + \mu )t,\] so that \[S(t) \geq S(0) \exp \bigg( -(\beta I + v + \mu )t\bigg) \geq 0 \hspace{1cm} (\text{since } S(0) \geq 0).\] That is \[ S(t) > 0.\] Similarly from Equation (3), we have \[\frac{dE}{dt} = \beta SI - (\mu + \alpha + \sigma)E \geq - (\mu + \alpha + \sigma)E, \] i.e., \[\frac{dE}{E} \geq - (\mu + \alpha + \sigma)dt \hspace{1cm} (\text{by separation of variables, since } E \ne 0) .\] Integrating both sides of the inequality from \(0\) to \(t\), we have \[ E(t) \geq E(0) \exp \bigg(- (\mu + \alpha + \sigma)t \bigg) \geq 0 \hspace{1cm} (\text{since } E(0) \geq 0).\] That is \( E(t) > 0.\) Also from Equation (4), we have \[\frac{dI}{dt} = \alpha E - (\mu + \gamma)I \geq - (\mu + \gamma)I,\] so that \[\frac{dI}{I} \geq - (\mu + \gamma)dt.\] On integrating from \(0\) to \(t\) we obtain \[ \ln \big( I(t) \big) - \ln \big( I(0) \big) \geq - (\mu + \gamma)t,\] that is \[I(t) \geq I(0) \exp \bigg(- (\mu + \gamma)t \bigg) \geq 0,\] so that \( I(t) > 0.\) Finally from Equation (5), we have \[\frac{dR}{dt} = v S + \gamma I + \sigma E - \mu R > \sigma E - \mu R,\] that is, \[\frac{dR}{dt} + \mu R > \sigma E,\] which has the integrating factor \( \exp (\mu t)\). Then we have \[\frac{d}{dt}\big(R \exp (\mu t)\big) > \sigma E \exp (\mu t).\] Integrating, with \(K\) a constant of integration, we have \[ R(t) > \frac{\sigma E}{\mu} + K \exp (-\mu t).\] When \(t=0\) we obtain \(K = R(0) - \frac{\sigma E}{\mu},\) so the solution becomes \[R(t) > R(0)\exp (-\mu t) +\frac{\sigma E}{\mu} \big(1- \exp (-\mu t)\big) \geq 0.\] That is \(R(t) > 0.\) Hence, we have proved that all variables are positive for all \(t > 0\).

2.4. Existence and uniqueness of solution for the SEIR model

The general first-order ODE is in the form:
\begin{equation}\label{ode} x' = f (t, x), \hspace{0.6cm} x(t_0 ) = x_0 \end{equation}
(10)
We are interested in the following questions:
  • (1) Under what conditions, the solution to Equation (10) exists?
  • (2) Under what conditions, there is a unique solution to Equation (10)?
To answer these questions, let \begin{equation*} \begin{cases} f_1 = b - \beta SI - (v + \mu) S,\\ f_2 = \beta SI - (\mu + \alpha + \sigma)E,\\ f_3= \alpha E - (\mu + \gamma)I,\\ f_4 = v S + \gamma I + \sigma E - \mu R. \end{cases} \end{equation*} We use the following theorem to establish the existence and uniqueness of the solution of our SEIR model.

Theorem 1.[Uniqueness of Solution] Suppose \(D\) denotes the domain, and

\begin{equation}\label{1st} |t - t_0 | \leq a, ||x - x_0|| \leq b, \;\text{where}\; x = (x_1 , x_2 , ..., x_n ), x_0 = (x_{10}, x_{20} , ..., x_{n0} ) \end{equation}
(11)
and suppose that \(f (t, x)\) satisfies the Lipschitz condition:
\begin{equation}\label{2nd} ||f (t, x_1 ) - f (t, x_2 )|| \leq k|| x_1- x_2 || , \end{equation}
(12)
whenever the pairs \((t, x_1)\) and \((t, x_2 )\) belong to the domain \(D\), where \(k\) is a positive constant. Then there exists a constant \(\delta > 0\) such that the system (10) has a unique (exactly one) continuous vector solution \(x(t)\) in the interval \(|t - t_0 | \leq \delta\).

It is important to note that condition (12) is satisfied by the requirement that \(\frac{\partial{f_i}}{\partial{x_j}}\), \(i, j = 1, 2, \dots, n\), be continuous and bounded in the domain \(D\).

Lemma 3. If \(f (t, x)\) has continuous partial derivatives \(\frac{\partial{f_i}}{\partial{x_j}}\) on a bounded closed convex domain \(D\), then it satisfies a Lipschitz condition in \(D\).

Our interest is in the domain
\begin{equation}\label{3rd} 1 \leq \epsilon \leq \mathbb{R} \end{equation}
(13)
so we look for a bounded solution in the region \(0 < \mathcal{R} < \infty.\) We now prove the following existence theorem.

Theorem 2. [Existence of solution] Let \(D\) denote the domain defined in (11) such that (12) and (13) hold. Then there exists a solution of the model system of Equations (2)-(5) which is bounded in the domain \(D\).

Proof. Let

\begin{equation} f_1 = b - \beta SI - (v + \mu ) S, \end{equation}
(14)
\begin{equation} f_2 = \beta SI - (\mu + \alpha + \sigma)E, \end{equation}
(15)
\begin{equation} f_3= \alpha E - (\mu + \gamma)I, \end{equation}
(16)
\begin{equation} f_4 = v S + \gamma I + \sigma E - \mu R \end{equation}
(17)
We show that \(\frac{\partial{f_i}}{\partial{x_j}},\hspace{0.2cm} i,j=1,2,3,4\) are continuous and bounded. We compute the following partial derivatives for all the model equations. From Equation (14); \[ \bigg|\frac{\partial f_1}{\partial S}\bigg| = \bigg|-\beta I - v - \mu \bigg| < \infty, \hspace{0.3cm} \bigg|\frac{\partial f_1}{\partial E}\bigg| = |0| < \infty, \hspace{0.3cm} \bigg|\frac{\partial f_1}{\partial I}\bigg| = \bigg|-\beta S \bigg| < \infty, \hspace{0.3cm} \bigg|\frac{\partial f_1}{\partial R}\bigg| = |0| < \infty.\] Similarly, from Equation (15): \[ \bigg|\frac{\partial f_2}{\partial S}\bigg| = \bigg|\beta I \bigg| < \infty, \hspace{0.3cm} \bigg|\frac{\partial f_2}{\partial E}\bigg| = \bigg|-(\mu + \alpha + \sigma) \bigg| < \infty, \hspace{0.3cm} \bigg|\frac{\partial f_2}{\partial I}\bigg| = \bigg|\beta S \bigg| < \infty, \hspace{0.3cm} \bigg|\frac{\partial f_2}{\partial R}\bigg| = |0| < \infty.\] Also from Equation (16); \[ \bigg|\frac{\partial f_3}{\partial S}\bigg| = |0| < \infty, \hspace{0.3cm} \bigg|\frac{\partial f_3}{\partial E}\bigg| = |\alpha| < \infty,\hspace{0.3cm} \bigg|\frac{\partial f_3}{\partial I}\bigg| = |- (\mu + \gamma)| < \infty,\hspace{0.3cm} \bigg|\frac{\partial f_3}{\partial R}\bigg| = |0| < \infty.\] Finally, from Equation (17): \[ \bigg|\frac{\partial f_4}{\partial S}\bigg| = |v| < \infty,\hspace{0.3cm} \bigg|\frac{\partial f_4}{\partial E}\bigg| = |\sigma| < \infty,\hspace{0.3cm} \bigg|\frac{\partial f_4}{\partial I}\bigg| = |\gamma| < \infty,\hspace{0.3cm} \bigg|\frac{\partial f_4}{\partial R}\bigg| = |-\mu| < \infty.\] We have clearly established that all these partial derivatives are continuous and bounded; hence, by Theorem 1, there exists a unique solution of (2) to (5) in the region \(D\).
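As a cross-check of these computations (a sketch, not part of the argument), the Jacobian of \((f_1,\dots,f_4)\) can be generated symbolically; every entry is at most linear in the state variables, hence continuous and bounded on the bounded region \(\Omega\):

import sympy as sp

S, E, I, R = sp.symbols('S E I R', nonnegative=True)
b, v, mu, beta, alpha, sigma, gamma = sp.symbols('b v mu beta alpha sigma gamma', positive=True)

f = sp.Matrix([
    b - beta * S * I - (v + mu) * S,          # f_1
    beta * S * I - (mu + alpha + sigma) * E,  # f_2
    alpha * E - (mu + gamma) * I,             # f_3
    v * S + gamma * I + sigma * E - mu * R,   # f_4
])

J = f.jacobian(sp.Matrix([S, E, I, R]))       # matrix of partial derivatives df_i/dx_j
sp.pprint(J)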

2.5. Existence of steady states of the system

In this section we find the equilibrium points and establish the asymptotic stability of the SEIR model. In order to obtain the equilibrium points of the system, we equate the right-hand sides of the model equations to zero; i.e.,
\begin{equation} \frac{dS}{dt} = \frac{dE}{dt} = \frac{dI}{dt} = \frac{dR}{dt} = 0 \end{equation}
(18)

2.6. Stability of the SEIR model (local asymptotic stability)

To study the local stability of the model, we first find the disease free equilibrium of the system (2) to (5). Equating the right-hand sides to zero, we have
\begin{equation} b - \beta SI - (v + \mu) S = 0, \end{equation}
(19)
\begin{equation} \beta SI - (\mu + \alpha + \sigma)E = 0, \end{equation}
(20)
\begin{equation} \alpha E - (\mu + \gamma)I = 0, \end{equation}
(21)
\begin{equation} v S + \gamma I + \sigma E - \mu R = 0. \end{equation}
(22)
From Equation (19), we have \( b = \beta SI + (v + \mu) S \equiv (\beta I + v + \mu) S\), which implies that \( S = \frac{b}{\beta I + v + \mu}\). At the disease free state there is no infection, so \(I = 0\), and at the initial state \(v =0\), which implies \( S = \frac{b}{\mu}. \) Now from Equation (20), we have \( \beta SI = (\mu + \alpha + \sigma)E,\) which implies that \( E = \frac{\beta SI}{(\mu + \alpha + \sigma)},\) hence \( E = 0\) (since \(I = 0\)). Equation (21), \( \alpha E = (\mu + \gamma)I,\) is then consistent with \(I = 0\) (since \(E = 0 \)). Finally from Equation (22), we have \( v S + \gamma I + \sigma E = \mu R,\) which implies that \( R = \frac{ v S + \gamma I + \sigma E}{\mu}, \) hence \(R = 0\) (since \(v = 0\) and \(I = E = 0\)). Thus the disease free equilibrium of this SEIR model is given by \(\mathcal{P}_0 = \bigg(\frac{b}{\mu}, 0, 0, 0 \bigg).\)

Note 1. We assumed \(b \ne \mu\) for this model.

2.7. The basic reproduction number \(R_0\)

The basic reproduction number \(R_0\) of an infection can be thought of as the number of cases that one measles case generates on average over the course of its infectious period, in an otherwise uninfected (susceptible) population (source: Wikipedia).

We determine the basic reproduction number \(R_0\) by the next generation matrix method. We consider only the two infected classes in our model, the Exposed \(E\) and the Infective \(I\) compartments, so that \(m = 2\). Define \(G = F V^{-1}\); then \(R_0 = \rho (F V^{-1})\), where \(\rho (F V^{-1})\) is the spectral radius of \(FV^{-1}\) (the spectral radius is the maximum of the absolute values of the eigenvalues).

Writing the infected subsystem as \(H'(x) = \mathcal{F}(x) - \mathcal{V}(x)\), we have
\begin{eqnarray} \mathcal{F}(x) = \begin{bmatrix} \beta SI \\ 0 \\ \end{bmatrix} \hspace{0.2cm}\text{and} \hspace{0.2cm} \mathcal{V}(x) = \begin{bmatrix} (\mu + \alpha + \sigma) E \\ \alpha E - (\mu +\gamma) I \\ \end{bmatrix}\,. \end{eqnarray}
(23)
Taking the Jacobians of \(\mathcal{F}(x)\) and \(\mathcal{V}(x)\), respectively, at the disease free equilibrium \(\mathcal{P}_{0}\), we obtain
\begin{eqnarray} F = \begin{bmatrix} 0 & \beta\frac{ b}{\mu} \\ 0 & 0 \\ \end{bmatrix}, V = \begin{bmatrix} (\mu + \alpha + \sigma) & 0 \\ -\alpha& (\mu + \gamma) \\ \end{bmatrix}, \hspace{0.5cm}\text{and}\hspace{0.5cm} V^{-1} = \begin{bmatrix} \frac{1}{(\mu + \alpha + \sigma)} & 0 \\ \frac{\alpha}{(\mu + \alpha + \sigma)(\mu + \gamma)}& \frac{1}{(\mu + \gamma)} \\ \end{bmatrix} \end{eqnarray}
(24)
Set \(k_1 = (\mu + \alpha + \sigma)\) and \(k_2 = (\mu + \gamma)\) such that \[ F V^{-1} = \begin{bmatrix} \frac{b \alpha \beta}{\mu k_{1} k_{2}}& \frac{\beta b}{\mu k_2} \\ 0 & 0 \\ \end{bmatrix}\] and \[ |F V^{-1} - \lambda I_{2}| = \begin{vmatrix} \frac{b \alpha \beta}{\mu k_{1} k_{2}}-\lambda & \frac{b \beta }{\mu k_2} \\ 0 & - \lambda\\ \end{vmatrix} = 0\,.\] Solving we get \[ \lambda^2 - \frac{b \beta \alpha}{\mu (\mu + \alpha+\sigma)(\mu + \gamma)}\lambda = 0\,.\] So that
\begin{eqnarray} R_0 = \rho (F V^{-1}) = \frac{\alpha\beta b }{\mu (\mu + \alpha+\sigma)(\mu + \gamma)}. \end{eqnarray}
(25)
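The same computation can be reproduced symbolically (a sketch; the matrices \(F\) and \(V\) are exactly those of (24)):

import sympy as sp

b, mu, beta, alpha, sigma, gamma = sp.symbols('b mu beta alpha sigma gamma', positive=True)
k1, k2 = mu + alpha + sigma, mu + gamma

F = sp.Matrix([[0, beta * b / mu],
               [0, 0]])
V = sp.Matrix([[k1, 0],
               [-alpha, k2]])

G = sp.simplify(F * V.inv())                      # next generation matrix F V^{-1}
eigs = list(G.eigenvals())                        # eigenvalues of F V^{-1}: 0 and the nonzero one
R0 = sp.simplify([e for e in eigs if e != 0][0])  # spectral radius = the nonzero eigenvalue
print(R0)   # equals alpha*beta*b/(mu*(mu+alpha+sigma)*(mu+gamma)), i.e., Equation (25)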
We can easily obtain the value of \(R_0\) using the values provided in Table 3. If \(R_0 < 1\), the disease will be eradicated from the population with time. Conversely, if \(R_0 > 1\), the measles disease is endemic.
Let's define \[ R_0 = \frac{b \beta \alpha}{\mu (\mu + \alpha +\sigma)(\mu + \gamma)} \hspace{0.3cm} \text{where:} \hspace{0.2cm} \mu (\mu + \alpha +\sigma)(\mu + \gamma) \ne 0\,.\]

2.8. Interpretation of \(R_0\)

\(R_0\) expresses the fact that the transmission rate (the rate at which exposed individuals become infected) and the contact rate (the average number of effective contacts with susceptible individuals per infective individual per unit time), relative to the rate at which an infectious individual recovers per unit time, play a significant role in determining whether or not a measles epidemic will occur in a given human population. Thus, the disease free equilibrium \((\frac{b}{\mu}, 0, 0, 0)\) is locally asymptotically stable given that \(R_0 < 1\), that is, \(b \beta \alpha < \mu (\mu + \alpha +\sigma)(\mu + \gamma). \) Alternatively, if \(R_0 > 1\), then the disease free equilibrium is unstable and the system is uniformly persistent; in other words, the measles disease is endemic. Hence, \(R_0\) is a threshold parameter for the model that determines the number of equilibria.

2.9. Endemic Equilibrium

The next step in our analysis is to show that there exists an endemic equilibrium:
\begin{eqnarray} \mathcal{P^*} = (S^* , E^* , I^* , R^*) > 0. \end{eqnarray}
(26)
and we also show that if \(R_0 > 1\), the measles disease is endemic. From our equilibrium Equations (19) to (22), considering Equations (20) and (21), we have that: \[\beta SI = (\mu + \alpha + \sigma)E,\;\;\;\;(\mu + \gamma)I = \alpha E.\] Dividing the first equation by the second yields \[\beta S = \frac{(\mu + \alpha + \sigma) (\mu + \gamma)}{\alpha}.\] That is, \[S^* = \frac{(\mu + \alpha + \sigma) (\mu + \gamma)}{\alpha \beta},\] which is clearly greater than zero. Similarly, adding Equations (19) and (20), we have \[ (v + \mu) S = b - (\mu + \alpha + \sigma)E, \] which implies \begin{equation*} S = \frac{ - (\mu + \alpha + \sigma)E + b}{ (v + \mu)}. \end{equation*} But for \(v = 0 \), we have
\begin{equation} \label{s21} S = \frac{ - (\mu + \alpha + \sigma)E + b}{ \mu}. \end{equation}
(27)
Now, From equation (21), we have
\begin{equation}\label{s22} I = \frac{\alpha E}{(\mu + \gamma)}. \end{equation}
(28)
From Equation (20), we have
\begin{equation}\label{s23} \beta S I = (\mu + \alpha + \sigma)E. \end{equation}
(29)
Now substituting the values of \(S\) and \(I\) from Equations (27) and (28) into Equation (29), we get \[ E \bigg[ \frac{- \beta (\mu + \alpha + \sigma) \alpha E}{ \mu (\mu + \gamma)} + \frac{\beta \alpha b}{ \mu (\mu + \gamma)} - (\mu + \alpha + \sigma) \bigg] = 0.\] Hence, either \( E = 0\) or \(\bigg( \frac{- \beta \alpha (\mu + \alpha + \sigma) E}{ \mu (\mu + \gamma)} + \frac{\beta \alpha b}{ \mu (\mu + \gamma)} - (\mu + \alpha + \sigma) \bigg) = 0.\) Now,
\begin{eqnarray}\label{1} \frac{- \beta \alpha (\mu + \alpha + \sigma) E}{ \mu (\mu + \gamma)} = (\mu + \alpha + \sigma) - \frac{\beta \alpha b}{\mu (\mu + \gamma)}. \end{eqnarray}
(30)
Multiplying both sides of (30) by \(-\frac{\mu (\mu + \gamma)}{\beta \alpha (\mu + \alpha + \sigma)}\), we obtain \[ E = \frac{b}{\mu + \alpha + \sigma} - \frac{ \mu (\mu + \gamma)}{\beta \alpha}.\] Therefore, \[ E^* = \frac{b}{\mu + \alpha + \sigma} \bigg[ 1 - \frac{\mu (\mu + \alpha + \sigma)(\mu + \gamma)}{\beta \alpha b} \bigg].\] Recall that \[R_0 = \frac{\beta \alpha b}{\mu (\mu + \alpha + \sigma)(\mu + \gamma)}\,.\] Now, we have
\begin{equation} E^* = \frac{b}{(\mu + \alpha + \sigma)} \bigg[1 - \frac{1}{R_0} \bigg]. \end{equation}
(31)
Considering \(I\) from Equation (28), that is \[ I = \frac{\alpha E}{(\mu + \gamma)}\,,\] and substituting \(E^*\) for \(E\), we obtain \[ I = \frac{\alpha }{(\mu + \gamma)} \bigg(\frac{b}{(\mu + \alpha + \sigma)} \bigg[ 1 - \frac{\mu (\mu + \alpha + \sigma)(\mu + \gamma)}{\beta \alpha b} \bigg] \bigg).\] So
\begin{equation} I^* = \frac{\alpha b}{(\mu + \gamma)(\mu + \alpha +\sigma)} \bigg[ 1 - \frac{1}{R_0} \bigg]. \end{equation}
(32)
Finally, from Equation (22), \[ v S + \gamma I + \sigma E = \mu R,\] so \[R = \frac{ v S + \gamma I + \sigma E}{\mu} .\] On substituting the values of \(S^*\), \(I^*\) and \(E^*\) for \(S\), \(I\) and \(E\) above, we obtain \[ R = \frac{v (\mu + \alpha + \sigma) (\mu + \gamma)}{\mu \alpha \beta} + \frac{\gamma \alpha b}{\mu (\mu + \gamma)(\mu + \alpha + \sigma)} \bigg[ 1 - \frac{1}{R_0} \bigg] + \frac{\sigma b}{\mu (\mu + \alpha + \sigma)} \bigg[ 1 - \frac{1}{R_0} \bigg].\] At the initial state, \(v = 0\), so \begin{align*}R &= \frac{\gamma \alpha b}{\mu (\mu + \gamma)(\mu + \alpha + \sigma)} \bigg[ 1 - \frac{1}{R_0} \bigg] + \frac{\sigma b}{\mu (\mu + \alpha + \sigma)} \bigg[ 1 - \frac{1}{R_0} \bigg]\\ &= \frac{b}{\mu (\mu + \alpha + \sigma)} \bigg ( \frac{\alpha \gamma}{\mu + \gamma} + \sigma \bigg) \bigg[ 1 - \frac{1}{R_0} \bigg].\end{align*} That is,
\begin{equation} R^* = \frac{b}{\mu (\mu + \alpha + \sigma)} \bigg ( \frac{\alpha \gamma}{\mu + \gamma} + \sigma \bigg) \bigg[ 1 - \frac{1}{R_0} \bigg]. \end{equation}
(33)
We have shown that \(S^*\), \(E^*\), \(I^*\) and \(R^*\) are all positive, meaning that \(P^* = (S^* , E^* , I^* ,R^* ) > 0\). \(P^*\) represents an endemic steady state, with a constant number of people in the population infected with measles, and if \(R_0 > 1\) the measles disease is endemic. This is biologically reasonable when \(S^* < N\), that is, when \( R_0 = \frac{\beta \alpha b}{\mu (\mu + \alpha + \sigma)(\mu + \gamma)} > 1.\) In other words, the necessary and sufficient condition for a unique \(P^*\) to exist in the feasible region \(\Omega\) is that \( 0 < S^* \leq \frac{b}{\mu}\), or equivalently \(R_0 \geq 1.\)
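A quick numerical sanity check of these expressions (a sketch using the Table 3 values with \(\sigma = 0\) and \(v = 0\), so that \(R_0 > 1\)) is to substitute \((S^*, E^*, I^*, R^*)\) into the right-hand side of system (6) and confirm that it vanishes:

# Endemic equilibrium check with Table 3 values, sigma = 0 and v = 0 (so R0 > 1)
b, mu, beta, gamma, alpha = 0.03691, 0.01241, 0.09091, 0.125, 0.14286
sigma, v = 0.0, 0.0
k1, k2 = mu + alpha + sigma, mu + gamma

R0 = alpha * beta * b / (mu * k1 * k2)
S = k1 * k2 / (alpha * beta)                        # S*
E = (b / k1) * (1.0 - 1.0 / R0)                     # E*
I = (alpha * b / (k1 * k2)) * (1.0 - 1.0 / R0)      # I*
R = (gamma * I + sigma * E) / mu                    # R* (with v = 0)

rhs = (b - beta * S * I - (v + mu) * S,
       beta * S * I - k1 * E,
       alpha * E - k2 * I,
       v * S + gamma * I + sigma * E - mu * R)
print(all(abs(x) < 1e-9 for x in rhs))              # True: the right-hand side vanishes at P*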

2.10. Local asymptotic stability

We establish the local stability of the disease-free equilibrium by evaluating the Jacobian matrix of Equations (2) to (5) at \(\mathcal{P}_0 = (\frac{b}{\mu}, 0, 0, 0)\); the local stability is determined from the eigenvalues of this matrix. The Jacobian matrix of Equations (2) to (5) is given by
\begin{equation}\label{s28} J(f_i, i = 1,\dots,4) = \begin{pmatrix} \frac{\partial f_1}{\partial S} & \frac{\partial f_1}{\partial E} & \frac{\partial f_1}{\partial I} & \frac{\partial f_1}{\partial R} \\ \frac{\partial f_2}{\partial S} & \frac{\partial f_2}{\partial E} & \frac{\partial f_2}{\partial I} & \frac{\partial f_2}{\partial R} \\ \frac{\partial f_3}{\partial S} &\frac{\partial f_3}{\partial E} & \frac{\partial f_3}{\partial I} & \frac{\partial f_3}{\partial R}\\ \frac{\partial f_4}{\partial S} & \frac{\partial f_4}{\partial E} & \frac{\partial f_4}{\partial I} & \frac{\partial f_4}{\partial R} \end{pmatrix} = \begin{pmatrix} -\beta I - (v + \mu) & 0 & -\beta S & 0 \\ \beta I & -(\mu + \alpha + \sigma) & \beta S & 0 \\ 0 & \alpha & - (\mu + \gamma) & 0 \\ v & \sigma & \gamma & - \mu \end{pmatrix}\,. \end{equation}
(34)
At the initial state \(v = 0\); substituting this together with \(S = \frac{b}{\mu}\) and \(I = 0\) (the disease free equilibrium) into Equation (34), we get
\begin{equation} J(\mathcal{P}_0) = \begin{pmatrix} -\mu & 0 & -\beta\frac{b }{\mu} & 0 \\ 0 & -(\mu + \alpha + \sigma) & \beta\frac{b}{\mu} & 0 \\ 0 & \alpha & - (\mu + \gamma) & 0 \\ 0 & \sigma & \gamma & -\mu \end{pmatrix}\,. \end{equation}
(35)
The disease free equilibrium \(\mathcal{P}_0\) is locally asymptotically stable if all the eigenvalues of \(J(\mathcal{P}_0)\) have negative real parts. We establish when this holds by finding the eigenvalues. To find the eigenvalues, we set \(|J(\mathcal{P}_0) - \lambda I_4 | = 0\), then
\begin{equation}\label{s31} |J(\mathcal{P}_0) - \lambda I_4 | = \begin{vmatrix} -\mu - \lambda & 0 & -\frac{b \beta}{\mu} & 0 \\ 0 & -(\mu + \alpha + \sigma) - \lambda & \frac{b \beta}{\mu} & 0 \\ 0 & \alpha & - (\mu + \gamma) - \lambda & 0 \\ 0 & \sigma & \gamma & -\mu - \lambda \end{vmatrix} = 0, \end{equation}
(36)
which implies \[ (\mu + \lambda)^2 \bigg [(\mu + \alpha + \sigma + \lambda)(\mu + \gamma + \lambda) - \frac{b \beta \alpha}{\mu} \bigg] = 0.\] Then
\begin{eqnarray}\label{solutio} \text{either} \hspace{0.2cm} (\mu + \lambda)^2 = 0, \hspace{0.2cm}\text{or}\hspace{0.2cm} (\mu + \alpha + \sigma + \lambda)(\mu + \gamma + \lambda) - \frac{b \beta \alpha}{\mu} = 0\,. \end{eqnarray}
(37)
From (37) we get \(\lambda_1 = \lambda_2 = -\mu < 0\), while the remaining eigenvalues \(\lambda_3\) and \(\lambda_4\) are the roots of the quadratic \(\lambda^2 + (2\mu + \alpha + \sigma + \gamma)\lambda + (\mu + \alpha + \sigma)(\mu + \gamma) - \frac{b \beta \alpha}{\mu} = 0.\) Since the coefficient of \(\lambda\) is positive, both roots have negative real parts if and only if the constant term is positive, that is, if and only if \((\mu + \alpha + \sigma)(\mu + \gamma) > \frac{b \beta \alpha}{\mu}\), which is precisely the condition \(R_0 < 1\). Hence, when \(R_0 < 1\) all the eigenvalues of \(J(\mathcal{P}_0)\) have negative real parts and the disease free equilibrium is locally asymptotically stable.
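This eigenvalue criterion is easy to check numerically (a sketch using the Table 3 values, for which \(\sigma = 0\) gives \(R_0 > 1\) and \(\sigma = 0.5\) gives \(R_0 < 1\); see Section 2.11):

import numpy as np

b, mu, beta, gamma, alpha = 0.03691, 0.01241, 0.09091, 0.125, 0.14286   # Table 3

def jacobian_dfe(sigma):
    S0 = b / mu                      # disease free equilibrium with v = 0
    return np.array([
        [-mu, 0.0, -beta * S0, 0.0],
        [0.0, -(mu + alpha + sigma), beta * S0, 0.0],
        [0.0, alpha, -(mu + gamma), 0.0],
        [0.0, sigma, gamma, -mu],
    ])

for sigma in (0.0, 0.5):
    eig = np.linalg.eigvals(jacobian_dfe(sigma))
    print(sigma, np.round(eig, 4), "stable" if np.all(eig.real < 0) else "unstable")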

2.11. Application of \(R_0\) to measles in Nigeria.

A deterministic compartmental mathematical model for measles has been formulated with the aim of studying the effects of mixing, transmission and control (or elimination) of the disease in Nigeria. Moreover, the basic reproduction number \(R_0\), which is the threshold parameter, has been derived. Recall that
\begin{eqnarray} R_0 = \frac{\alpha\beta b }{\mu (\mu + \alpha+\sigma)(\mu + \gamma)}. \end{eqnarray}
(38)
Table 3 gives the model parameters used, together with their values and sources.
Table 3. Model parameter values used and Sources.
Parameter Value Source
\(b\) 0.03691 [17] and [18]
\(v\) 0.0, 0.25, 0.50 \& 0.75 Assumed
\(\mu\) 0.01241 [19] and [18]
\(\beta\) 0.09091 per day Immunization Action Coalition
\(\gamma\) 0.125 per day [20]
\(\sigma\) 0.0, 0.25, 0.50 \& 0.75 Assumed
\(\alpha\) 0.14286 per day [20]
We substitute the values of \(\beta\), \(b\), \(\mu\), \(\alpha\), \(\sigma\) and \(\gamma\) from Table 3, which are pertinent to measles in Nigeria, into \(R_0\) to investigate its behaviour. It is easy to check that when \(\sigma = 0\) we have \( R_0 = 1.8105\); when \(\sigma = 0.25\) we have \( R_0 = 0.69364\); when \(\sigma = 0.50\) we have \( R_0 = 0.42900\); and finally, when \(\sigma = 0.75\) we have \( R_0 = 0.31053.\)
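These values can be reproduced with a few lines of Python using the Table 3 parameter values:

b, mu, beta, gamma, alpha = 0.03691, 0.01241, 0.09091, 0.125, 0.14286   # Table 3

def R0(sigma):
    return alpha * beta * b / (mu * (mu + alpha + sigma) * (mu + gamma))

for sigma in (0.0, 0.25, 0.50, 0.75):
    print(f"sigma = {sigma:.2f}  ->  R0 = {R0(sigma):.5f}")   # approximately 1.81, 0.69, 0.43, 0.31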
In summary, when \(\sigma = 0\) we have \(R_0 > 1\), and when \(\sigma \geq 0.25\) (among the values considered) we have \(R_0 < 1\). This illustrates the fact that \(\sigma\), the rate of exposed individuals who undergo testing and therapy, plays a significant role in controlling (and eliminating) the disease. Thus, the disease free equilibrium \((\frac{b}{\mu}, 0, 0, 0)\) is locally asymptotically stable when \(R_0 < 1\), i.e., for \(\sigma \geq 0.25\), that is, \(b \beta \alpha < \mu (\mu + \alpha +\sigma)(\mu + \gamma). \) Alternatively, when \(\sigma = 0\) we have \(R_0 > 1\), and the disease-free equilibrium is unstable, i.e., the measles disease will be endemic in the population. Hence, we have established the fact that \(R_0\) is a threshold parameter for the model that determines the number of equilibria.

3. Numerical results

In this section, we use the explicit fourth-order Runge-Kutta (RK4) scheme to solve numerically the non-linear first order ordinary differential equations (ODEs) of our SEIR model (6) with given initial conditions. Numerical simulations with the RK4 scheme are used to show the impact of vaccination and therapy on the state equations. Details of the RK4 scheme are discussed in [15]. The state equations are solved over a simulated period of time, with the parameter values given in Table 3. The simulation results are depicted in Figures 2 to 5.
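A minimal sketch of such an RK4 run is given below; the parameters are those of Table 3, while the initial conditions are assumed, purely illustrative values (the actual initial data used for Figures 2 to 5 are not stated here):

import numpy as np

# Table 3 parameter values; v and sigma are the controls varied in Figures 2 to 5
b, mu, beta, gamma, alpha = 0.03691, 0.01241, 0.09091, 0.125, 0.14286

def seir_rhs(y, v, sigma):
    # right-hand side of the SEIR system (6)
    S, E, I, R = y
    return np.array([
        b - beta * S * I - (v + mu) * S,
        beta * S * I - (mu + alpha + sigma) * E,
        alpha * E - (mu + gamma) * I,
        v * S + gamma * I + sigma * E - mu * R,
    ])

def rk4(y0, v, sigma, dt=0.1, T=200.0):
    # classical fourth-order Runge-Kutta integration of system (6)
    ts = np.arange(0.0, T + dt, dt)
    ys = np.empty((len(ts), 4))
    ys[0] = y0
    for n in range(len(ts) - 1):
        y = ys[n]
        k1 = seir_rhs(y, v, sigma)
        k2 = seir_rhs(y + 0.5 * dt * k1, v, sigma)
        k3 = seir_rhs(y + 0.5 * dt * k2, v, sigma)
        k4 = seir_rhs(y + dt * k3, v, sigma)
        ys[n + 1] = y + (dt / 6.0) * (k1 + 2 * k2 + 2 * k3 + k4)
    return ts, ys

# assumed, illustrative initial proportions (S, E, I, R)
y0 = np.array([0.90, 0.06, 0.04, 0.0])
ts, ys = rk4(y0, v=0.25, sigma=0.25)   # one of the scenarios of Figure 3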

Figure 2. Measles dynamics is shown when \(\sigma\) = 0.0 and v = 0.0 (left) and when \(\sigma\) = 0.25 and v = 0.0 (right)

Figure 3. Measles dynamics is shown when \(\sigma\) = 0.25 and v = 0.25 (left) and when \(\sigma\) = 0.50 and v = 0.25 (right)

Figure 4. Measles dynamics is shown when \(\sigma\) = 0.50 and v = 0.50 (left) and when \(\sigma\) = 0.75 and v = 0.50 (right)

Figure 5. Measles dynamics is shown when \(\sigma\) = 0.50 and v = 0.75 (left) and when \(\sigma\) = 0.75 and v = 0.75 (right)

3.1. Interpretation of simulation results

In Figure 2 we present the dynamics of measles when \(\sigma = 0.0\) and \(v = 0 \), and when \(\sigma = 0.25\) and \(v = 0\). We can see in Figure 2 that, if none of the exposed individuals in the latent period are diagnosed and treated, and no control measure is introduced into the susceptible class, it takes much longer for the number of exposed individuals to decrease. Likewise, it takes a significantly longer period before we notice any significant improvement in the number of individuals recovering from the measles disease. Similarly, the number of infective individuals increases significantly before a drop is noticed. On the other hand, when \(25\%\) of the exposed individuals in the latent period are diagnosed and treated, there is a clear improvement in the result.

In Figure 3, we simulate the model when \(\sigma = 0.25\) and \(v = 0.25\), and when \(\sigma = 0.50\) and \(v = 0.25\). It can be observed from Figure 3 that if \(25\%\) of susceptible individuals are vaccinated, in addition to an increase from \(25\%\) to \(50\%\) in the proportion of exposed individuals in the latent period who are diagnosed and treated, it takes less time for the number of exposed individuals to decrease significantly. Also, it takes a significantly shorter period before noticing any significant improvement in the number of individuals recovering from the disease. Similarly, the number of infective individuals declines significantly over time.

Figure 4 shows the dynamics of measles when \(\sigma = 0.50\) and \(v = 0.50\), and when \(\sigma = 0.75\) and \(v = 0.50\). From the simulation results in Figure 4, if \(50\%\) of susceptible individuals are vaccinated, in addition to an increase from \(50\%\) to \(75\%\) in the proportion of exposed individuals in the latent period who are diagnosed and treated, there is a rapid decline of exposed individuals with time. This is a significantly improved result in comparison with the two previous figures. Also, it takes a much shorter period for individuals to recover from the disease. Similarly, the number of infective individuals goes down within a shorter period of time.

Figure 5 shows the simulation of the model when \(\sigma = 0.50\) and \(v = 0.75\), and when \(\sigma = 0.75\) and \(v = 0.75\). Looking at Figure 5 closely, we find that if \(75\%\) of exposed individuals in the latent period are diagnosed and treated, in addition to an increase from \(50\%\) to \(75\%\) in the proportion of susceptible individuals who are vaccinated, we obtain a better result in comparison with the three previous scenarios.

4. Conclusion

The SEIR model showed significant success in describing the transmission dynamics of measles within a given population. The model strongly indicates that the spread of the disease largely depends on the contact rate between susceptible and infected individuals within the population. With the assumed parameter values, we modelled measles in Nigeria using a deterministic SEIR model in order to investigate the impact that control measures on susceptibles, as well as on exposed individuals in the latent period, can have on the overall population dynamics, with a view to controlling and eliminating the disease. We established the existence, uniqueness and stability of the solution. The application of \(R_0\) to measles data for Nigeria was also given. Finally, numerical simulations of the model with the RK4 scheme were presented.

Acknowledgments

We appreciate the support given to us by African Institute for Mathematical Sciences (AIMS), Senegal. All the authors are well appreciated for their contributions.

Author Contributions

All authors contributed equally to the writing of this paper. All authors read and approved the final manuscript.

Competing Interests

The author(s) do not have any competing interests in the manuscript.

References

  1. Measles transmission mode. (October 2018). Retrieved from https://www.cdc.gov/measles/about/transmission.html.
  2. Measles disease and transmission mode. (September 2018). Retrieved from https://www.ncdc.gov.ng/diseases/info/M.
  3. Immunization Misconceptions. (September 2019). Retrieved from https://www.who.int/vaccine_safety/initiative/detection/immunization_misconceptions/en/
  4. Measles Outbreak in Nigeria. (2019). Retrieved from https://reliefweb.int/report/nigeria/nigeria-measles-outbreak-dg-echo-who-ncdc-ngos-echo-daily-flash-16-march-2019.
  5. Hethcote, H. W. (2000). The mathematics of infectious diseases. SIAM Review, 42(4), 599-653.
  6. Bakare, E. A., Adekunle, Y. A., & Kadiri, K. O. (2012). Modelling and simulation of the dynamics of the transmission of measles. International Journal of Computer Trends and Technology, 3, 174-178.
  7. Bolarian, G. (2014). On the dynamical analysis of a new model for measles infection. International Journal of Mathematics Trends and Technology, 7(2), 144-155.
  8. Fred, M. O., Sigey, J. K., Okello, J. A., Okwoyo, J. M., & Kang'ethe, G. J. (2014). Mathematical modeling on the control of measles by vaccination: Case study of Kisii County, Kenya. The SIJ Transactions on Computer Science Engineering and Its Applications (CSEA), 2, 61-69.
  9. Roberts, M. G., & Tobias, M. I. (2000). Predicting and preventing measles epidemics in New Zealand: application of a mathematical model. Epidemiology & Infection, 124(2), 279-287.
  10. Momoh, A. A., Ibrahim, M. O., Uwanta, I. J., & Manga, S. B. (2013). Mathematical model for control of measles epidemiology. International Journal of Pure and Applied Mathematics, 87(5), 707-717.
  11. Tessa, O. M. (2006). Mathematical model for control of measles by vaccination. In Proceedings of Mali Symposium on Applied Sciences (Vol. 2006, pp. 31-36).
  12. Momoh, A. A., Ibrahim, M. O., Uwanta, I. J., & Manga, S. B. (2013). Mathematical model for control of measles epidemiology. International Journal of Pure and Applied Mathematics, 87(5), 707-717.
  13. Ochoche, J. M., & Gweryina, R. I. (2014). A mathematical model of measles with vaccination and two phases of infectiousness. IOSR Journal of Mathematics, 10(1), 95-105.
  14. Verguet, S., Johri, M., Morris, S. K., Gauvreau, C. L., Jha, P., & Jit, M. (2015). Controlling measles using supplemental immunization activities: a mathematical model to inform optimal policy. Vaccine, 33(10), 1291-1296.
  15. Sowole, S. O., Sangare, D., Ibrahim, A. A., & Paul, I. A. (2019). On the existence, uniqueness, stability of solution and numerical simulations of a mathematical model for measles disease. International Journal of Advances in Mathematics, 2019(4), 84-111.
  16. Nigeria population: Worldometers on world population. (October 2019). Retrieved from https://www.worldometers.info/world-population/nigeria-population/
  17. Nigeria birth rate from Index Mundi. (September 2019). Retrieved from https://www.indexmundi.com/nigeria/birth_rate.html
  18. Nigeria Migration Profile. (September 2019). Retrieved from https://www.dailytrust.com.ng/more-foreign-visitors-troop-into-nigeria-data.html.
  19. Nigeria death rate from Index Mundi. (September 2019). Retrieved from https://www.indexmundi.com/nigeria/death_rate.html.
  20. Trottier, H., & Philippe, P. (2003). Deterministic modeling of infectious diseases: measles cycles and the role of births and vaccination. The Internet Journal of Infectious Diseases, 1(2), https://print.ispub.com/api/0/ispub-article/7099.
]]>
Optimal polynomial decay for a coupled system of wave with past history https://old.pisrt.org/psr-press/journals/oma-vol-4-issue-1-2020/optimal-polynomial-decay-for-a-coupled-system-of-wave-with-past-history/ Fri, 10 Apr 2020 17:58:39 +0000 https://old.pisrt.org/?p=4017
OMA-Vol. 4 (2020), Issue 1, pp. 49 - 59 Open Access Full-Text PDF
S. M. S. Cordeiro, R. F. C. Lobato, C. A. Raposo
Abstract: This work deals with a coupled system of wave equations with past history effective in just one of the equations. We show that the dissipation given by the memory effect is not strong enough to produce exponential decay. On the other hand, we show that the solution of this system decays polynomially with rate \(t^{-\frac{1}{2}}\). Moreover, by a recent result due to A. Borichev and Y. Tomilov, we show that this rate is optimal. To the best of our knowledge, there is no result on the optimal rate of polynomial decay for coupled wave systems with memory in the previous literature.
]]>

Open Journal of Mathematical Analysis

Optimal polynomial decay for a coupled system of wave with past history

S. M. S. Cordeiro, R. F. C. Lobato, C. A. Raposo\(^1\)
Faculty of Exact Sciences and Technology Federal University of Pará 68440-000, Abaetetuba, PA, Brazil.; (S.M.S.C & R.F.C.L)
Federal University of São João del-Rey and PhD Program of the Federal University of Bahia 40170-110, Salvador, BA, Brazil.; (C.A.R)
\(^1\)Corresponding Author: hakemali@yahoo.com

Abstract

This work deals with a coupled system of wave equations with past history effective in just one of the equations. We show that the dissipation given by the memory effect is not strong enough to produce exponential decay. On the other hand, we show that the solution of this system decays polynomially with rate \(t^{-\frac{1}{2}}\). Moreover, by a recent result due to A. Borichev and Y. Tomilov, we show that this rate is optimal. To the best of our knowledge, there is no result on the optimal rate of polynomial decay for coupled wave systems with memory in the previous literature.

Keywords:

Coupled system of waves equation, polynomial decay, memory, optimality.

1. Introduction

In this paper we consider a coupled system of wave with past history given by

\begin{equation}\label{1eq1-1} u_{tt}-\Delta u+\int^{\infty}_0g(s)\Delta u(t-s)\;ds+\alpha v=0 \quad \mbox{in}\quad \Omega \times (0,\infty), \end{equation}
(1)
\begin{equation} \label{1eq1-2} v_{tt}-\Delta v+\alpha u=0 \quad \mbox{in}\quad \Omega \times (0,\infty), \end{equation}
(2)
\begin{equation} \label{1eq1-3}u=v=0\quad \mbox{on}\quad \Gamma \times (0,\infty), \end{equation}
(3)
\begin{equation} \label{1eq1-4}(u(x,0),v(x,0))=(u_0(x),v_0(x)),\quad \mbox{in}\quad \Omega, \end{equation}
(4)
\begin{equation} \label{1eq1-5} (u_t(x,0),v_t(x,0))=(u_1(x),v_1(x)), \quad \mbox{in}\quad \Omega, \end{equation}
(5)
where \(\Omega\) is an open bounded set of \(\mathbb{R}^n\) with smooth boundary \(\Gamma\).

The above model can be used to describe the evolution of a system consisting of two elastic membranes subject to an elastic force that attracts one membrane to the other with coefficient \(\alpha >0\). Note that the term \(\int^{\infty}_0g(s)\Delta u(t-s)\;ds\), acts on the first membrane as a stabilizer.

Many interesting physical phenomena, such as viscoelasticity, hereditary polarization in dielectrics, population dynamics or heat flow in real conductors, to name some, are modeled by differential equations which are influenced by the past values of one or more of the variables in play, the so-called equations with memory. The main problem in the analysis of equations of this kind lies in their nonlocal character, due to the presence of the memory term given by the time convolution of the unknown function against a suitable memory kernel. The memory term may produce loss of exponential stability for the system [1]. The history of nonlocal problems with integral conditions for partial differential equations is recent and goes back to [2]. In [3], a review of the progress on nonlocal models of integral type was given, with many discussions related to physical justifications, advantages, and numerical applications.

Coupled wave systems have been considered in various contexts. In [4] both wave equations are damped on the boundary, the coupling is effected by a compact operator, and exponential stability is obtained when the boundary damping is linear. Boundary damping is also considered in [5, 6]. On exact boundary controllability for linearly coupled wave equations, we refer to [7]. Uniform exponential stability was given in [8] for wave equations coupled in parallel with coupling distributed springs and viscous dampers, due to different boundary conditions and wave propagation speeds.

For a weak damping acting on only one equation, the optimal polynomial decay for coupled wave equations was studied in [9]. In [10], it was proved that the energy of the associated weakly dissipative coupled system decays polynomially, with explicit polynomial decay rates for sufficiently smooth solutions. In [11], under new compatibility assumptions, the authors proved polynomial decay for the energy of solutions and optimized previous results by the interpolation techniques introduced in [10].

On the asymptotic behavior of the coupled system (1)-(5) we refer to the work [12], where the authors proved, by the method introduced in [11], that the solution has a polynomial rate of decay. The central question of this work is to determine the best decay rate for the system (1)-(5). In this direction, we prove that the associated semigroup decays with rate \(t^{-\frac{1}{2}}\). Moreover, we show that this rate is optimal. To the best of our knowledge, the optimal rate of polynomial decay for coupled wave systems with memory has not been considered previously in the literature.

The paper is organized as follows: In Section 2 we discuss the existence, regularity and uniqueness of strong solutions of the system (1)-(5) by the semigroup technique, see [13]. In Section 3 we study the lack of exponential decay using Prüss's results [14]. Finally, in Section 4 we show that the system is polynomially stable and give an optimal decay rate, that is, a rate that cannot be improved. For this we use a recent result due to Borichev and Tomilov [15].

2. Semigroup Setup

Following the approach of Dafermos [16] and Fabrizio and Morro [17], we consider \(\eta =\eta ^{t}(s)\), the relative history of \(u\), defined as
\begin{eqnarray}\label{etacond} \eta =\eta ^{t}(s)=u (t)-u (t-s). \end{eqnarray}
(6)
Hence, putting \[ \beta_0=1-\int_0^{\infty}g(s)\;ds>0, \] the system (1)-(5) turns into the system
\begin{equation}\label{1eq1-6} u_{tt}-\beta_0 \Delta u -\int^{\infty}_0g(\tau)\Delta \eta(\cdot,\tau) \;d\tau+\alpha v=0 \quad \mbox{in}\quad \Omega \times (0,\infty), \end{equation}
(7)
\begin{equation} \label{1eq1-7} v_{tt}-\Delta v +\alpha u=0 \quad \mbox{in}\quad \Omega \times (0,\infty), \end{equation}
(8)
\begin{equation} \label{1eq1-8} \eta_t+\eta_s-u_t=0,\quad \mbox{in}\quad \Omega \times (0,\infty) \end{equation}
(9)
\begin{equation} \label{1eq1-9} u=v=\eta^t(s)=0\quad \mbox{on}\quad \Gamma \times (0,\infty),\;\forall s\geq 0 \end{equation}
(10)
\begin{equation} \label{1eq1-10} (u(x,0),v(x,0))=(u_0(x),v_0(x))\quad \mbox {in}\quad \Omega, \end{equation}
(11)
\begin{equation} \label{1eq1-11} (u_t(x,0),v_t(x,0))=(u_1(x),v_1(x))\quad \mbox {in}\quad \Omega, \end{equation}
(12)
\begin{equation} \label{1eq1-12} \eta_0(\cdot,s)=u_0(\cdot,0)-u_0(\cdot,-s),\quad \mbox{in}\quad \Omega \times (0,\infty), \end{equation}
(13)

where the third equation is obtained by differentiating (6) with respect to \(s\), and the condition (13) means that the history is considered as an initial value.

We study the existence and uniqueness of solutions for the system (7)-(13) using semigroup techniques. As in [18], we make the following hypotheses on \(g\):

\begin{eqnarray}\label{hipg} g\in C^1(\mathbb{R}^+)\cap L^1(\mathbb{R}^+),\;g(t)>0,\; \exists\;\; q_{0},\;q_1>0:\; -q_0g(t)\leq g'(t)\leq -q_1g(t),\;\forall t\geq 0. \end{eqnarray}
(14)
In view of (14), let \(L^2_g(\mathbb{R}^+;H^1_0(\Omega))\) be the Hilbert space of \(H^1_0(\Omega)\)-value functions on \(\mathbb{R}^+\), endowed with the inner product \[ (f,h)_{L^2_g(\mathbb{R}^+,H^1_0(\Omega))}=\int^{\infty}_0g (s)\int_{\Omega}\nabla f(x,s) \cdot \nabla\overline{h(x,s)} \; dx \; ds. \] To give an accurate formulation of the evolution problem we introduce the product Hilbert spaces \[ \mathcal{H}=H^1_0(\Omega)\times L^2(\Omega)\times H^1_0(\Omega)\times L^2(\Omega)\times L^2_g(\mathbb{R}^+;H^1_0(\Omega)) \] endowed with the following inner product
\begin{eqnarray}\label{2eq2-PH} \langle U,V\rangle &=&\beta_0 \int_{\Omega} \nabla u_1\cdot \nabla \overline{v_1}\;dx+\int_{\Omega}u_2\overline{v_2}\; dx+\int_{\Omega} \nabla u_3\cdot \nabla \overline{v_3}\;dx+\int_{\Omega}u_4\overline{v_4}\; dx\nonumber \\ &&+\alpha \int_{\Omega}(u_1\overline{v_3}+u_3\overline{v_1})\;dx+\int^{\infty}_0g (s)\int_{\Omega}\nabla u_5(x,s) \cdot \nabla \overline{v_5}(x,s) \; dx \; ds, \end{eqnarray}
(15)
where \(U=(u_1,u_2,u_3,u_4,u_5)^T\), \(V=(v_1,v_2,v_3,v_4,v_5)^T\in \mathcal{H}\).
Let \(U=(u,u_t,v,v_t,\eta)^T\) be and we define the operator \(\mathcal{A}:D(\mathcal{A})\subset \mathcal{H}\rightarrow \mathcal{H}\) given by \begin{eqnarray*} \begin{array}{c} \mathcal{A}=\left[ \begin{array}{ccccc} \ 0 & I & 0 & 0 & 0 \\ \beta_0 \Delta & 0 & -\alpha I & 0 & \mathcal{T} \\ \ 0 & 0 & 0 & I & 0 \\ -\alpha I & 0 & \Delta & 0 & 0 \\ \ 0 & I & 0 & 0 & -(\cdot)_s \\ \end{array} \right] \end{array} \end{eqnarray*} with domain \begin{eqnarray*} D(\mathcal{A})&=&\{ (u,\varphi,v,\psi,\eta)^T\in \mathcal{H}; \quad \beta_0 u-\int^{\infty}_0g(s)\eta(s) \; ds \in H^1_0(\Omega)\cap H^2(\Omega),\\ &&\varphi \in H^1_0(\Omega), v\in H^1_0(\Omega)\cap H^2(\Omega), \psi \in H^1_0(\Omega), \eta \in D(\mathcal{T}) \} \end{eqnarray*} where \[ \mathcal{T}\eta = \int^{\infty}_0g(s)\Delta \eta (s) \; ds, \quad \forall \eta \in D(\mathcal{T}) \] with \[ D(\mathcal{T})=\{ \eta \in L^2_g(\mathbb{R}^+;H^1_0(\Omega));\eta_s\in L^2_g(\mathbb{R}^+;H^1_0(\Omega)), \eta(0)=0\}, \] where \(\eta_s\) is the distributional derivative of \(\eta\) with respect to the internal variable \(s\). Therefore, the system (7)-(13) is equivalent to
\begin{eqnarray}\label{2eq2-9} \frac{dU}{dt} &=&\mathcal{A} U \end{eqnarray}
(16)
\begin{eqnarray} \label{2eq2-10} U(0) &=& U_0, \end{eqnarray}
(17)
with \(U=(u,u_t,v,v_t,\eta)^T\), \(U_0=(u_0,u_1,v_0,v_1,\eta_0)^T\). With the above notations, we have the following result.

Theorem 1. The operator \(\mathcal{A}\) generates a C\(_0\)-semigroup \(S(t)\) of contractions on \(\mathcal{H}\). Thus, for any initial data \(U_0\in \mathcal{H}\), the problem (7)-(13) has a unique weak solution \(U(t)\in C^0([0,\infty[, \mathcal{H})\). Moreover, if \(U_0\in D(\mathcal{A})\), then \(U(t)\) is a strong solution of (7)-(13), that is, \(U(t)\in C^1([0,\infty[,\mathcal{H})\cap C^0([0,\infty[,D(\mathcal{A}))\).

Proof. It is easy to see that \(D(\mathcal{A})\) is dense in \(\mathcal{H}\). Now, for \(U=(u, u_t,v, v_t,\eta)^T\in D(\mathcal{A})\) and using the inner product (15), we get \begin{align*} &\langle {\mathcal{A}}U,U\rangle =\beta_0\int_{\Omega}\nabla u_t\cdot \nabla \overline{u}\;dx+\int_{\Omega}(\beta_0\Delta u-\alpha v+\int^{\infty}_0g(s) \Delta \eta(s)\;ds)\overline{u_t}\;dx\\ &\;\;\;+\int_{\Omega}\nabla v_t\cdot \nabla \overline{v}\;dx+\int_{\Omega}(\Delta v-\alpha u)\overline{v_t}\;dx+\alpha \int_{\Omega}(u_t\overline{v}+v\overline{u_t})\;dx+\int^{\infty}_0g(s)\int_{\Omega}\nabla(u_t-\eta_s(s))\cdot\nabla \overline{\eta} (s)\;dx\;ds \end{align*} from where it follows that \begin{eqnarray*} \langle {\mathcal{A}}U,U\rangle=-\int^{\infty}_0g(s)\int_{\Omega}\nabla \eta_s(s)\cdot \nabla \overline{\eta}(s)\;dx\;ds. \end{eqnarray*} Integrating by parts and using (14), we have \begin{eqnarray*} {\mathcal{R}}e\langle {\mathcal{A}}U,U\rangle=\frac{1}{2}\int^{\infty}_0g'(s)\int_{\Omega}|\nabla \eta(s)|^2\;dx\;ds\leq -\frac{q_1}{2}\int^{\infty}_0g(s)\int_{\Omega}|\nabla \eta(s)|^2\;dx\;ds\leq 0. \end{eqnarray*} Therefore, \(\mathcal{A}\) is a dissipative operator.
Next, we show that \((I-\mathcal{A})\) is maximal. For this, let us consider the equation \[ (I-\mathcal{A})U=F \] where \(U=(u,\varphi,v,\psi,\eta)^T\) and \(F=(f^1,f^2,f^3,f^4,f^5)^T\in \mathcal{H}\). Then, in terms of its components, we can write

\begin{eqnarray}\label{res1} u-\varphi &=& f^1, \end{eqnarray}
(18)
\begin{eqnarray} \label{res2} \varphi-\beta_0\Delta u+\alpha v-\int^{\infty}_0g(s)\Delta \eta(s)\;ds&=& f^2, \end{eqnarray}
(19)
\begin{eqnarray} \label{res3} v-\psi &=& f^3, \end{eqnarray}
(20)
\begin{eqnarray} \label{res4} \psi-\Delta v+\alpha u &=& f^4, \end{eqnarray}
(21)
\begin{eqnarray} \label{res5} \eta -\varphi +\eta_s &=& f^5. \end{eqnarray}
(22)
Integrating (22), we have
\begin{eqnarray}\label{res6} \eta(\cdot,s)=\varphi(\cdot)(1-e^{-s})+\int^s_0e^{\tau-s}f^5(\cdot,\tau)\;d\tau. \end{eqnarray}
(23)
Substituting \(\varphi\) and \(\eta\) from (18) and (23) into (19), we get
\begin{eqnarray}\label{res7} u-\beta_g \Delta u+\alpha v=f^1+f^2+\int^{\infty}_0g(s)\left[(e^{-s}-1)\Delta f^1+\int^s_0e^{\tau - s}\Delta f^5(\tau)\;d\tau\right]\;ds \end{eqnarray}
(24)
where \[ \beta_g=\beta_0+\int^{\infty}_0g(s)(1-e^{-s})\;ds. \] Note that \(\beta_g\) is a positive constant by virtue of (14). Moreover, it can be shown that the right-hand side of (24) is in \(H^{-1}(\Omega)\).
On the other hand, the substitution of \(\psi\) given in (20) into (21) gives us
\begin{eqnarray}\label{res8} v-\Delta v+\alpha u=f^3+f^4. \end{eqnarray}
(25)
First we prove that \(u, v \in H^1_0(\Omega)\). To do this, let us consider the bilinear form
\begin{eqnarray}\label{res9} a(\Phi_1,\Phi_2)&=&\int_{\Omega}u_1u_2\;dx+\int_{\Omega}v_1v_2\;dx+\beta_g\int_{\Omega}\nabla u_1\cdot \nabla u_2\;dx\nonumber \\ &+&\int_{\Omega}\nabla v_1\cdot \nabla v_2\;dx+\alpha \int_{\Omega}(v_1u_2+u_1v_2) \end{eqnarray}
(26)
where \(\Phi_1=(u_1,v_1)\) and \(\Phi_2=(u_2,v_2)\). Then, Lax-Milgram theorem (see [19]) provides existence and uniqueness of the solutions \[ u,v\in H^1_0(\Omega). \] From (18) and (20), we have \(\varphi,\psi \in H^1_0(\Omega)\). Now, from (23), we obtain
\begin{eqnarray}\label{res10} ||\eta||^2_{L^2_g(\mathbb{R}^+;H^1_0(\Omega))}\leq C\left(||\varphi||^2_{H^1_0(\Omega)}+||f^5||^2_{L^2_g(\mathbb{R}^+;H^1_0(\Omega))}\right), \end{eqnarray}
(27)
from where it follows that \[ \eta \in L^2_g(\mathbb{R}^+;H^1_0(\Omega)). \] From (19), we get \[ \beta_0 \Delta u+ \int^{\infty}_0g(s)\Delta \eta(s)\;ds \in L^2(\Omega). \] On the other hand, from (22), we obtain \begin{eqnarray*} ||\eta_s||^2_{L^2_g(\mathbb{R}^+;H^1_0(\Omega))}\leq C\left(||\varphi||^2_{H^1_0(\Omega)}+||f^5||^2_{L^2_g(\mathbb{R}^+;H^1_0(\Omega))} +||\eta||^2_{L^2_g(\mathbb{R}^+;H^1_0(\Omega))}\right)\,. \end{eqnarray*} From where it follows that \[ \eta_s\in L^2_g(\mathbb{R}^+;H^1_0(\Omega)). \] Again from (23), we have \[ \eta(0)=0. \] Thus, \(I-\mathcal{A}\) is maximal. Then, thanks to the Lumer-Phillips theorem (see [13], Theorem 4.3), the operator \(\mathcal{A}\) generates a C\(_0\)-semigroup of contractions \(e^{t\mathcal{A}}\) on \(\mathcal{H}\). The proof is now complete.

3. Lack of exponential decay

Our starting point is to show that the semigroup associated with the system (7)-(13) is not exponentially stable. To show this, we assume that \(g(t)=e^{-\mu t}\), with \(t\in \mathbb{R}^+\) and \(\mu>1\). We will use Prüss's theorem [14] to prove the lack of exponential stability.
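As a quick check on this choice of kernel (a sketch, not part of the argument), \(g(t)=e^{-\mu t}\) with \(\mu>1\) satisfies the hypotheses (14) with \(q_0=q_1=\mu\), and gives \(\int_0^{\infty} g(s)\,ds = \frac{1}{\mu} < 1\), hence \(\beta_0 = 1-\frac{1}{\mu}>0\):

import sympy as sp

t, s = sp.symbols('t s', nonnegative=True)
mu = sp.symbols('mu', positive=True)
g = sp.exp(-mu * t)

# g'(t) = -mu*g(t), so (14) holds with q0 = q1 = mu
print(sp.simplify(sp.diff(g, t) + mu * g))               # prints 0
# the total mass of the kernel is 1/mu, which is < 1 for mu > 1, so beta0 = 1 - 1/mu > 0
print(sp.integrate(g.subs(t, s), (s, 0, sp.oo)))         # prints 1/mu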

Theorem 2. Let \(S(t)=e^{\mathcal{A}t}\) be a C\(_0\)-semigroup of contractions on Hilbert space. Then \(S(t)\) is exponentially stable if and only if \[ \rho(\mathcal{A})\supseteq \{i\beta:\beta\in \mathbb{R}\}\equiv i\mathbb{R} \] and \[ \overline{\lim_{|\beta|\rightarrow \infty}}\|(i\beta-\mathcal{A})^{-1}\|< \infty \] hold, where \(\rho(\mathcal{A})\) is the resolvent set of \(\mathcal{A}\).

To do this, let us consider the spectral problem:
\begin{eqnarray}\label{3eq3-31} \left\{ \begin{array}{c} -\Delta w_m = \lambda_{m}w_{m} \quad \mbox{in}\quad \Omega\\ w_{m}=0\quad \mbox{on}\quad \Gamma, \end{array} \right. \end{eqnarray}
(28)
where \[ \lim_{m \rightarrow \infty}\lambda_{m}=+\infty. \] The following theorem describes the main results of this section.

Theorem 3. Let \(S(t)\) be the C\(_0\)-semigroup of contractions generated by \(\mathcal{A}\). Then \(S(t)\) is not exponentially stable.

Proof. Here we will use Theorem 2; that is, we will show that there exists a sequence of values \(\lambda_{m}\) such that

\begin{eqnarray}\label{3eq3-32} ||(\lambda_{m}-\mathcal{A})^{-1}||_{\mathcal{L}(\mathcal{H})}\rightarrow \infty. \end{eqnarray}
(29)
It is equivalent to prove that there exist a sequence of data \(F_m\in {\mathcal{H}}\) and a sequence of complex numbers \(\lambda_m\in i\mathbb{R}\), with \(||F_m||_{\mathcal{H}}\leq1\) such that
\begin{eqnarray}\label{3eq3-33} ||(\lambda_mI-\mathcal{A})^{-1}F_m||_{\mathcal{H}}\rightarrow \infty \end{eqnarray}
(30)
where
\begin{eqnarray}\label{3eq3-34} \lambda_mU_m-\mathcal{A}U_m=F_m \end{eqnarray}
(31)
with \(U_m\) not bounded. To simplify the notation we will omit the subindex \(m\). Then, the Equation (31) becomes
\begin{eqnarray}\label{3eq3-2} \left\{ \begin{array}{c} i\lambda u - \varphi = f^1,\\ i\lambda \varphi -\beta_0 \Delta u +\alpha v -\int^{\infty}_0g(s)\Delta \eta(x,s)ds= f^2,\\ i\lambda v - \psi = f^3,\\ i\lambda \psi - \Delta v +\alpha u = f^4,\\ i\lambda \eta - \varphi +\eta_s=f^5. \end{array} \right. \end{eqnarray}
(32)
Let us consider \(f^1=f^3=f^5=0\) and \(f^2=f^4=w_m\) to obtain \(\varphi=i\lambda u\) and \(\psi=i\lambda v\). Then, the system (32) becomes
\begin{eqnarray}\label{3eq3-3} \left\{ \begin{array}{c} -\lambda^2u -\beta_0 \Delta u +\alpha v -\int^{\infty}_0g(s)\Delta \eta(x,s)ds= w_m,\\ -\lambda^2v - \Delta v +\alpha u = w_m,\\ i\lambda \eta +\eta_s-i\lambda u=0. \end{array} \right. \end{eqnarray}
(33)
We look for solutions of the form \[ u=aw_m, \quad v=bw_m, \quad \varphi=cw_m,\quad \psi=dw_m,\quad \eta(x,s)=\gamma(s)w_m \] with \(a,b,c,d\in \mathbb{C}\), where \(\gamma(s)\) depends on \(\lambda\) and will be determined explicitly in the sequel. From (33), we get that \(a\) and \(b\) satisfy
\begin{eqnarray}\label{3eq3-4} \left\{ \begin{array}{c} -\lambda^2a +\beta_0 a \lambda_m+\alpha b+\lambda_m\int^{\infty}_0g(s)\gamma(s)ds= 1,\\ -\lambda^2b + \lambda_mb+\alpha a = 1,\\ \gamma_s+i\lambda \gamma -i\lambda a=0. \end{array} \right. \end{eqnarray}
(34)
Solving (34)\(_3\) we get
\begin{eqnarray}\label{3eq3-5} \gamma(s)=Ce^{-i\lambda s}+a. \end{eqnarray}
(35)
Since \(\eta(0)=0\) then \(C=-a\), and (35) becomes
\begin{eqnarray}\label{3eq3-6} \gamma(s)=a-ae^{-i\lambda s}. \end{eqnarray}
(36)
Then, from (36) we have
\begin{eqnarray}\label{3eq3-7} \int^{\infty}_0g(s)\gamma(s)\;ds=\int^{\infty}_0g(s)(a-ae^{-i\lambda s})\;ds=ab_0-a\int^{\infty}_0g(s)e^{-i\lambda s}\;ds \end{eqnarray}
(37)
where \[ b_0=\int^{\infty}_0g(s)\;ds. \] Now, choosing \(\lambda=\sqrt{\lambda_m}\) and using equations (34)\(_1\) and (34)\(_2\), we obtain \begin{eqnarray*} &&a=\frac{1}{\alpha}, \\ &&b=\frac{\lambda_m(1-\beta_0)}{\alpha^2}-\frac{\lambda_m}{\alpha}\int^{\infty}_0g(s)\gamma(s)\;ds+\frac{1}{\alpha}, \\ &&c=i\frac{\sqrt{\lambda_m}}{\alpha},\\ &&d=i\sqrt{\lambda_m}\left(\frac{\lambda_m(1-\beta_0)}{\alpha^2} -\frac{\lambda_m}{\alpha}\int^{\infty}_0g(s)\gamma(s)\;ds+\frac{1}{\alpha}\right). \end{eqnarray*} Recalling that \[ \varphi=cw_m=i\frac{\sqrt{\lambda_m}}{\alpha}w_m, \] we get \[ ||\varphi||^2_{L^2(\Omega)}=\frac{\lambda_m}{\alpha^2}. \] Therefore we have \[ \lim_{m\rightarrow \infty}||U_m||^2_{\mathcal{H}}\geq \lim_{m\rightarrow \infty}||\varphi||^2_{L^2(\Omega)}=\lim_{m\rightarrow \infty}\frac{\lambda_m}{\alpha^2}= \infty. \] By Theorem 2, it follows that \(S(t)\) is not exponentially stable. The proof is now complete.

4. Polynomial decay and optimality result

In this section we study the polynomial decay associated to the system (7)-(13) and subsequently we find the optimal rate of decay. Then, let us consider the resolvent equation \[ (i\lambda I- \mathcal{A})U=F,\quad \mbox{with}\quad \lambda \in \mathbb{R} \quad \mbox{and}\quad F\in \mathcal{H}, \] that is,
\begin{eqnarray}\label{sem1} i\lambda u-\varphi&=&f^1, \end{eqnarray}
(38)
\begin{eqnarray} \label{sem2} i\lambda \varphi-\beta_0\Delta u+\alpha v-{\mathcal{T}} \eta&=& f^2, \end{eqnarray}
(39)
\begin{eqnarray} \label{sem3} i\lambda v -\psi &=&f^3, \end{eqnarray}
(40)
\begin{eqnarray} \label{sem4} i\lambda \psi -\Delta v+\alpha u&=&f^4, \end{eqnarray}
(41)
\begin{eqnarray} \label{sem5} i\lambda \eta -\varphi +\eta_s&=& f^5. \end{eqnarray}
(42)
In the next step we shall prove three lemmas that are important for the proof of the main result.

Lemma 1. The solutions of the system (7)-(13), given by the well-posedness theorem, satisfy \begin{eqnarray*} \int_{\Omega}\int^{\infty}_0g(s)|\nabla \eta|^2\; ds \; dx\leq K |\lambda|^2 ||U||_{\mathcal{H}}||F||_{\mathcal{H}} \end{eqnarray*} where \(K\) is a positive constant and \(|\lambda|> 1\).

Proof. Multiplying the equality (39) by \(\overline{\varphi}\) and integrating by parts on \(\Omega\), we get

\begin{eqnarray}\label{lem1.1} i \lambda \int_{\Omega}|\varphi|^2\;dx+\underbrace{\beta_0\int_{\Omega}\nabla u\cdot \nabla \overline{\varphi}\;dx}_{:=I_1} +\underbrace{\alpha \int_{\Omega}v\overline{\varphi}\; dx}_{:=I_2}+\underbrace{\int_{\Omega}\int^{\infty}_0g(s)\; \nabla \eta(s)\cdot \nabla\overline{\varphi}\; ds\; dx}_{:=I_3} =\int_{\Omega}f^2\overline{\varphi}\; dx. \end{eqnarray}
(43)
Substituting \(\varphi\) given in (38) into \(I_1\) and \(I_2\), we have
\begin{eqnarray}\label{lem1.2} I_1=-i\lambda \beta_0 \int_{\Omega}|\nabla u|^2\; dx-\beta_0 \int_{\Omega}\nabla u\cdot \nabla \overline{f^1}\; dx \end{eqnarray}
(44)
and
\begin{eqnarray}\label{lem1.3} I_2&=&- i\lambda \alpha \int_{\Omega}|u|^2\; dx-\alpha \int_{\Omega}u\overline{f^1}\; dx. \end{eqnarray}
(45)
Now, substituting \(\varphi\) given in (42) into \(I_3\) and integrating by parts, we obtain
\begin{eqnarray}\label{lem1.4} I_3=-i\lambda \int_{\Omega}\int^{\infty}_0g(s)|\nabla \eta(s)|^2 \;ds \; dx-\int_{\Omega}\int^{\infty}_0g'(s)|\nabla \eta(s)|^2 \; ds \;dx -\int_{\Omega}\int^{\infty}_0g(s)\nabla \eta(s)\cdot \nabla \overline{f^5} \; ds\;dx. \end{eqnarray}
(46)
Substituting (44), (45) and (46) into (43), we get
\begin{align}\label{min1} &i \lambda \int_{\Omega}|\varphi|^2\;dx-i\lambda \beta_0 \int_{\Omega}|\nabla u|^2\; dx- i\lambda \alpha \int_{\Omega}|u|^2\; dx-i\lambda \int_{\Omega}\int^{\infty}_0g(s)|\nabla \eta(s)|^2 \;ds \; dx-\frac{1}{2}\int_{\Omega}\int^{\infty}_0g'(s)|\nabla \eta(s)|^2 \; ds \;dx\nonumber \\ &=\beta_0 \int_{\Omega}\nabla u\cdot \nabla \overline{f^1}\; dx+\alpha \int_{\Omega}u\overline{f^1}\; dx+\int_{\Omega}\int^{\infty}_0g(s)\nabla \eta(s)\cdot \nabla \overline{f^5} \; ds\;dx+\int_{\Omega}f^2\overline{\varphi}\; dx. \end{align}
(47)
Taking the real part on the left side of the above equality and using the hypotheses (14) on \(g\), our conclusion follows.

Lemma 2. For any \(\epsilon>0\), there exists a positive constant \(K_{\epsilon}\) such that \begin{eqnarray*} &&\beta_0\int_{\Omega}|\nabla u|^2\; dx+\int_{\Omega}|\nabla v|^2\;dx+\alpha \int_{\Omega}(u\overline{v}+v\overline{u})\;dx\\ &&\leq \int_{\Omega}|\varphi|^2\;dx+\int_{\Omega}|\psi|^2\;dx+\epsilon \int_{\Omega}|\nabla u|^2\;dx+K_{\epsilon} |\lambda|^2 ||U||_{\mathcal{H}}||F||_{\mathcal{H}}+K||U||_{\mathcal{H}}||F||_{\mathcal{H}} \end{eqnarray*} where \(K\) is a positive constant.

Proof. Multiplying the equalities (39) and (41) by \(\overline{u}\) and \(\overline{v}\), respectively, integrating by parts on \(\Omega\) and summing up the result, we get

\begin{eqnarray}\label{lem2.1} &&\underbrace{i\lambda \int_{\Omega}\varphi \overline{u}\;dx}_{:=I_4}+\beta_0\int_{\Omega}|\nabla u|^2\; dx+\alpha \int_{\Omega}v \overline{u}\; dx +\int_{\Omega}\int^{\infty}_0g(s)\; \nabla \eta(s)\cdot \nabla \overline{u}\;ds\;dx\nonumber \\ &&+\underbrace{i\lambda \int_{\Omega} \psi \overline{v}\; dx}_{:=I_5}+\int_{\Omega}|\nabla v|^2\; dx+\alpha \int_{\Omega} u\overline{v}\; dx=\int_{\Omega}f^2\overline{u}\; dx+\int_{\Omega}f^4\overline{v}\; dx. \end{eqnarray}
(48)
Substituting \(\overline{i\lambda u}\) given in (38) into \(I_4\) and \(\overline{i\lambda v}\) given in (40) into \(I_5\), we find
\begin{eqnarray}\label{new1} &&\beta_0\int_{\Omega}|\nabla u|^2\; dx+\int_{\Omega}|\nabla v|^2\; dx+\alpha \int_{\Omega}(u\overline{v}+v \overline{u})\; dx=\int_{\Omega}(|\varphi|^2+|\psi|^2)\;dx-\int^{\infty}_0g(s)\int_{\Omega}\nabla \eta\cdot \nabla \overline{u}\;dx\nonumber \\ &&+\int_{\Omega}\varphi \overline{f^1}\;dx+\int_{\Omega}f^2\overline{u}\;dx+\int_{\Omega}\psi\overline{f^3}\;dx+\int_{\Omega}f^4\overline{v}\;dx. \end{eqnarray}
(49)
Now, using Poincaré and Young inequalities, we have
\begin{eqnarray}\label{new2} &&\beta_0\int_{\Omega}|\nabla u|^2\; dx+\int_{\Omega}|\nabla v|^2\; dx+\alpha \int_{\Omega}(u\overline{v}+v \overline{u})\; dx\nonumber \\ &&\leq \int_{\Omega}(|\varphi|^2+|\psi|^2)\;dx+\epsilon \int_{\Omega}|\nabla u|^2\;dx +K_{\epsilon}||\eta||^2_{L^2_g(\mathbb{R}^+;H^1_0(\Omega))}+K ||U||_{\mathcal{H}}||F||_{\mathcal{H}}. \end{eqnarray}
(50)
Using Lemma 1, our conclusion follows.

Lemma 3. Under the conditions of the previous lemma, we have \begin{eqnarray*} \frac{b_0}{2}\int_{\Omega}|\varphi|^2\; dx&\leq &\epsilon\int_{\Omega}(|\nabla u|^2+|\nabla v|^2)\;dx+ K_{\epsilon} |\lambda|^2 ||U||_{\mathcal{H}}||F||_{\mathcal{H}}+K ||U||_{\mathcal{H}}||F||_{\mathcal{H}} \end{eqnarray*} and \begin{eqnarray*} \left(\frac{1}{2}-\frac{K}{|\lambda|^2}\right)\int_{\Omega}|\psi|^2\;dx\leq K_{\epsilon} |\lambda|^2 ||U||_{\mathcal{H}}||F||_{\mathcal{H}}+K ||U||_{\mathcal{H}}||F||_{\mathcal{H}} \end{eqnarray*} with \(|\lambda|>1\) large enough.

Proof. Multiplying Equation (42) by \(g(s)\overline{\varphi}\), integrating on \(\Omega\) and in \(s\) over \((0,\infty)\), we find \begin{eqnarray*} \underbrace{i\lambda \int^{\infty}_0g(s)\int_{\Omega}\eta(s)\overline{\varphi}\;dx\;ds}_{:=I_6}-b_0\int_{\Omega}|\varphi|^2\;dx +\int^{\infty}_0g(s)\int_{\Omega}\eta_s(s)\overline{\varphi}\;dx\;ds =\int^{\infty}_0g(s)\int_{\Omega}f^5(s)\overline{\varphi}\;dx\;ds \end{eqnarray*} where \(b_0=\int^{\infty}_0g(s)\;ds\). On the other hand, noting that \[ \int^{\infty}_0g(s)\int_{\Omega}\eta_s(s)\overline{\varphi}\;dx\;ds=-\int^{\infty}_0g'(s)\int_{\Omega}\eta(s)\overline{\varphi}\;dx\;ds \] and substituting \(\overline{i\lambda \varphi}\) given in (39) into \(I_6\), we get

\begin{eqnarray}\label{new3} &&b_0\int_{\Omega}|\varphi|^2\;dx=-\beta_0\int^{\infty}_0g(s)\int_{\Omega}\eta(s)\Delta \overline{u}\;dx\;ds+\alpha \int^{\infty}_0g(s)\int_{\Omega} \eta(s)\overline{v}\;dx\;ds-\alpha \int_{\Omega}\left(\int^{\infty}_0g(s)\eta(s)\;ds\right)\nonumber \\ &&\times\left(\int^{\infty}_0g(s)\Delta \eta(s)\;ds\right)+\int^{\infty}_0g'(s)\int_{\Omega}\eta(s)\overline{\varphi}\;dx\;ds-\int^{\infty}_0g(s)\int_{\Omega}\eta(s)\overline{f^2}\;dx\;ds. \end{eqnarray}
(51)
Using the hypotheses on \(g\) given in (14) and taking into account the Poincaré and Young inequalities, we have \begin{eqnarray*} \frac{b_0}{2}\int_{\Omega}|\varphi|^2\;dx \leq \epsilon \int_{\Omega}(|\nabla u|^2+|\nabla v|^2)\;dx+K_{\epsilon}||\eta||^2_{L^2_g(\mathbb{R}^+;H^1_0(\Omega))}+K ||U||_{\mathcal{H}}||F||_{\mathcal{H}}. \end{eqnarray*} Using Lemma 1, the first inequality follows. To show the second inequality, we substitute Equation (38) into (42). This gives
\begin{eqnarray}\label{new4} i\lambda \eta - i\lambda u+\eta_s=f^5-f^1. \end{eqnarray}
(52)
Now, we substitute \(u\) given in (41) into (52). Then, we obtain,
\begin{eqnarray}\label{new5} i\lambda \alpha \eta- \lambda^2\psi-i\lambda \Delta v+\alpha \eta_s=\alpha (f^5-f^1)+i\lambda f^4. \end{eqnarray}
(53)
Multiplying Equation (53) by \(\overline{\int^{\infty}_{0}g(s)\psi\,ds}\), integrating by parts on \(\Omega\) and proceeding as in the proof of the first estimate, we have \begin{eqnarray*} \frac{1}{2}\int_{\Omega}|\psi|^2\;dx\leq \frac{K}{|\lambda|^2}\int_{\Omega}|\psi|^2\;dx+K|\lambda|^2||U||_{\mathcal{H}}||F||_{\mathcal{H}}+K||F||^2_{\mathcal{H}}. \end{eqnarray*} The second inequality follows. The proof is now complete.

Now, we are in a position to state and prove the main result of this paper.

Theorem 4. The semigroup associated to the system (7)-(13) is polynomially stable and \[ ||S(t)U_0||_{\mathcal{H}}\leq \frac{K}{\sqrt{t}}||U_0||_{D(\mathcal{A})}. \] Moreover, this result is optimal.

Proof. From Lemmas 1, 2 and 3, choosing \(\epsilon >0\) small enough and \(|\lambda|> 1\) large enough, we have \[ ||U||^2_{\mathcal{H}}\leq K|\lambda|^2||U||_{\mathcal{H}} ||F||_{\mathcal{H}}+K ||F||^2_{\mathcal{H}}. \] It follows that \[ ||U||^2_{\mathcal{H}}\leq K|\lambda|^4||F||^2_{\mathcal{H}}, \] which can be written as \begin{eqnarray*} ||(\lambda I-\mathcal{A})^{-1}||\leq K|\lambda|^2, \end{eqnarray*} that is

\begin{eqnarray} ||(\lambda I-\mathcal{A})^{-1}||= K\mathcal{O}(|\lambda|^2), \,\,\lambda \rightarrow \infty. \label{BTth} \end{eqnarray}
(54)
Then, using the theorem of A. Borichev and Y. Tomilov (see [15], Theorem 2.4), the condition (54) is equivalent to \begin{eqnarray*} ||S(t){\mathcal{A}}^{-1}||=K\mathcal{O}(t^{-\frac{1}{2}})\Rightarrow ||S(t){\mathcal{A}}^{-1}F||_{\mathcal{H}}\leq \frac{K}{\sqrt{t}}||F||_{\mathcal{H}}, \,\,t \rightarrow \infty. \end{eqnarray*} Then taking \(\mathcal{A}U_0=F\), we get \[ ||S(t)U_0||_{\mathcal{H}}\leq \frac{K}{\sqrt{t}}||U_0||_{D(\mathcal{A})}. \] Therefore the solution decays polynomially. To prove that the rate of decay is optimal, we will argue by contradiction. Suppose that the rate \(t^{-\frac{1}{2}}\) can be improved; for example that the rate is \(t^{-\frac{1}{2-\epsilon}}\) for some \(0< \epsilon < 2\). From Theorem 5.3 in [20], the operator \[ |\lambda|^{\displaystyle{-2+ \frac{\epsilon}{2}}}||(\lambda I-{\mathcal{A}})^{-1}|| \] should be bounded, but this does not happen. To see this, let us suppose that there exist a sequence \((\lambda_{\mu})\subset {\mathbb{R}}\) with \(\lim_{\mu \rightarrow \infty}|\lambda_{\mu}|=\infty\) and sequences \((U_{\mu})\subset D(\mathcal{A})\), \((F_{\mu})\subset {\mathcal{H}}\) such that \[ (i\lambda_{\mu}I-{\mathcal{A}})U_{\mu}=F_{\mu} \] is bounded in \({\mathcal{H}}\) and \[ \lim_{\mu \rightarrow \infty}|\lambda_{\mu}|^{\displaystyle{-2+ \frac{\epsilon}{2}}}||U_{\mu}||_{\mathcal{H}}=\infty. \] So, following the same steps as in the proof of Theorem 3 we can conclude that \[ |\lambda_{\mu}|^{\displaystyle{-2+ \frac{\epsilon}{2}}}||U_{\mu}||_{\mathcal{H}}\geq \mathcal{O}\left(\mu^{\displaystyle{\frac{\epsilon}{2}}}\right)\rightarrow \infty,\quad \mbox{as}\quad \mu \rightarrow \infty. \] Therefore the rate cannot be improved. The proof is now complete.

Acknowledgments

This research is partially supported by PNPD/UFBA/CAPES(Brazil).

Author Contributions

All authors contributed equally to the writing of this paper. All authors read and approved the final manuscript.

Competing Interests

The author(s) do not have any competing interests in the manuscript.

References

  1. Raposo, C. A., Bastos, W. D., & Alves, B. F. (2010). Loss of exponential stability for a thermoelastic system with memory. Electronic Journal of Differential Equations, 15, 1--5.[Google Scholor]
  2. Cannon, J. R. (1963). The solution of heat equation subject to the specification of energy. Quarterly of Applied Mathematics, 21, 155--160.[Google Scholor]
  3. Bažant, Z. P., & Jirásek, M. (2002). Nonlocal integral formulation of plasticity and damage: Survey of progress. Journal of Engineering Mechanics, 128, 1119--1149.[Google Scholor]
  4. Komornik, V., & Bopeng, R. (1997). Boundary stabilization of compactly coupled wave equations. Asymptotic Analysis, 14, 339--359.[Google Scholor]
  5. Aassila, M. (1999). A note on the boundary stabilization of a compactly coupled system of wave equations. Applied Mathematics Letters, 12, 19--24.[Google Scholor]
  6. Aassila, M. (2001). Strong asymptotic stability of a compactly coupled system of wave equations, Applied Mathematics Letters, 14, 285--290.[Google Scholor]
  7. Bastos, W. D., Spezamiglio, A., & Raposo, C. A. (2011). On exact boundary controllability for linearly coupled wave equations. Journal of Mathematical Analysis and Applications, Article.ID 15692.[Google Scholor]
  8. Najafi, M. (2001). Study of exponential stability of coupled wave systems via distributed stabilizer. International Journal of Mathematics and Mathematical Sciences, 28, 479--491.[Google Scholor]
  9. Lobato, R. F. C., Cordeiro, S. M. C., Santos, M. L., & Almeida Junior, D. S. (2014). Optimal polynomial decay to coupled wave equations and its numerical properties. Journal of Applied Mathematics, Art. ID 897080.[Google Scholor]
  10. Boussoira, F. A. (1999). Stabilisation frontiére indirecte de systémes faiblement couplés. Comptes rendus de l'Académie des Sciences, 328, 1015--1020.[Google Scholor]
  11. Boussoira, F. A., Cannarsa, P., & Komornik, V. (2002). Indirect internal stabilization of weakly coupled evolution equations. Journal of Evolution Equations, 2, 127--150.[Google Scholor]
  12. Almeida, R. G. C., & Santos, M. L. (2011). Lack of exponential decay of a coupled system of wave equations with memory. Nonlinear Analysis: Real World Applications, 12, 1023--1032.[Google Scholor]
  13. Pazy, A. (1983). Semigroups of linear operators and applications to partial differential equations. Springer-Verlag, New York.[Google Scholor]
  14. Prüss, J. (1984). On the spectrum of C\(_0\)-semigroups. Transactions of the American Mathematical Society, 284, 847--857.[Google Scholor]
  15. Borichev, A., & Tomilov, Y. (2009). Optimal polynomial decay of functions and operator semigroups. Mathematische Annalen, 347, 455--478.[Google Scholor]
  16. Dafermos, C. M. (1970). Asymptotic stability in viscoelasticity. Archive for Rational Mechanics and Analysis, 37, 297--308.[Google Scholor]
  17. Fabrizio, M., & Morro, A. (1992). Mathematical problems in linear viscoelasticity. SIAM Studies in Applied Mathematics, Philadelphia.[Google Scholor]
  18. Rivera, J. E. M., & Naso, M. G. (2007). Asymptotic stability of semigroups associated with linear weak dissipative systems with memory. Journal of Mathematical Analysis and Applications, 326, 691--707.[Google Scholor]
  19. Brezis, H. (1992). Analyse Fonctionelle, Théorie et Applications. Masson, Paris.[Google Scholor]
  20. Fatori, L. H., & Rivera, J. E. M. (2010). Rates of decay to weak thermoelastic Bresse system. IMA Journal of Applied Mathematics, 75, 881--904.[Google Scholor]

Open Journal of Mathematical Analysis

Linear differential equations with fast growing coefficients in the unit disc

Benharrat Belaïdi\(^1\), Mohamed Amine Zemirni
Department of Mathematics, Laboratory of Pure and Applied Mathematics, University of Mostaganem (UMAB), B. P. 227 Mostaganem, Algeria.(B.B & M.A.Z)
\(^1\)Corresponding Author: benharrat.belaidi@univ-mosta.dz

Abstract

In this article, we give new conditions on the fast growing analytic coefficients of linear complex differential equations to estimate the iterated \(p\)-order and iterated \(p\)-type of all solutions in the unit disc \(\mathbb{D}\), where \(p\in \mathbb{N}\backslash \{1\}\).

Keywords:

Complex differential equation, iterated \(p\)-order, iterated \(p\)-type.

1. Introduction

Consider the linear differential equation

\begin{equation} f^{(k)}+a_{k-1}(z)f^{(k-1)}+\cdots +a_{1}(z)f^{\prime }+a_{0}(z)f=0, \label{equ1} \end{equation}
(1)
where \(k\geq 1\) is an integer, \(a_{0}(z),a_{1}(z),\dots ,a_{k-1}(z)\) are analytic functions in the unit disc \(\mathbb{D}=\{z\in \mathbb{C}:|z|< 1\}\) and \(a_{0}(z)\not\equiv 0\). The theory of complex differential equations in the unit disc has been developed since the 1980's, see [1]. In the year 2000, Heittokangas [2] first investigated the growth and oscillation theory of Equation (1), when the coefficients \( a_{0}(z),a_{1}(z),\dots ,a_{k-1}(z)\) are analytic functions in the unit disc \(\mathbb{D}\), by introducing a definition of suitable function spaces. His results also gave some important tools for further investigations on the theory of meromorphic solutions of Equation (1).

In this article, we investigate the growth of solutions of Equation (1) when the coefficients \(a_{0}(z),a_{1}(z),\dots ,a_{k-1}(z)\) are analytic in \(\mathbb{D}\), and we deal with the case in which the coefficients are fast growing in \(\mathbb{D}\). To define the order of fast growth of analytic functions, we define inductively, for \(r\in \lbrack 0,+\infty ),\) \( \exp _{0}r=r\), \(\exp _{1}r=e^{r}\) and \(\exp _{n+1}r=\exp \left( \exp _{n}r\right) ,\) \(n\in \mathbb{N}\). For all \(r\) sufficiently large, we define \(\log _{0}r=r,\) \(\log _{1}r=\log r\) and \(\log _{n+1}r=\log \left( \log _{n}r\right) ,\) \(n\in \mathbb{N}\). We also use the fundamental results and standard notations of Nevanlinna theory in the complex plane \(\mathbb{C}\) and in the unit disc \(\mathbb{D}\); for more details on Nevanlinna theory and its applications to complex differential equations in the complex plane and in the unit disc, we refer to [2, 3, 4, 5, 6, 7].
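As a small illustration of these iterated functions (a minimal sketch added for the reader's convenience, not part of the original text), \(\exp _{n}\) and \(\log _{n}\) can be computed by simple iteration, and one can check numerically that \(\log _{n}(\exp _{n}r)=r\) for admissible \(r\):

```python
import math

def exp_n(n, r):
    """Iterated exponential: exp_0(r) = r, exp_{n+1}(r) = exp(exp_n(r))."""
    for _ in range(n):
        r = math.exp(r)
    return r

def log_n(n, r):
    """Iterated logarithm: log_0(r) = r, log_{n+1}(r) = log(log_n(r)), for r large enough."""
    for _ in range(n):
        r = math.log(r)
    return r

print(exp_n(2, 1.0))             # e^e ~ 15.1543
print(log_n(2, exp_n(2, 1.0)))   # ~ 1.0
```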

Before stating our main results, we recall definitions and preliminary remarks concerning meromorphic and analytic functions in \(\mathbb{D}\). For the definitions and more discussions, we refer the reader to [7, 8, 9, 10].

Let \(p\geq 1\) be an integer and \(f\) be a meromorphic function in \(\mathbb{D}\) . Then, the iterated \(p\)-order of \(f\) is defined by \begin{equation*} \rho _{p}(f)=\limsup_{r\rightarrow 1^{-}}\frac{\log _{p}^{+}T(r,f)}{\log \frac{1}{1-r}}, \end{equation*} where \(\log _{1}^{+}r=\log ^{+}r=\max \{\log r;0\}\), \(\log _{p+1}^{+}r=\log ^{+}\left( \log _{p}^{+}r\right) \) and \(T(r,f)\) is the Nevanlinna characteristic function. If \(f\) is analytic in \(\mathbb{D}\), then the iterated \(p\)-order of \(f\) is defined by \begin{equation*} \rho _{M,p}(f)=\limsup_{r\rightarrow 1^{-}}\frac{\log _{p+1}^{+}M(r,f)}{\log \frac{1}{1-r}}, \end{equation*} where \(M(r,f)=\max \left\{ |f(z)|:|z|=r\right\} .\)
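For instance (an illustrative example added here, not taken from the references), if \(f\) is analytic in \(\mathbb{D}\) with \(M(r,f)=\exp _{p}\left\{ \frac{1}{1-r}\right\}\) -- for \(p=1\) one may take \(f(z)=\exp \left( \frac{1}{1-z}\right)\) -- then \begin{equation*} \rho _{M,p}(f)=\limsup_{r\rightarrow 1^{-}}\frac{\log _{p+1}^{+}M(r,f)}{\log \frac{1}{1-r}}=\limsup_{r\rightarrow 1^{-}}\frac{\log \frac{1}{1-r}}{\log \frac{1}{1-r}}=1. \end{equation*}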

Remark 1. For \(p=1\), \(\rho _{1}(f)\) is called the order, see [2], and for \(p=2,\) \(\rho _{2}(f)\) is called the hyper-order, see [11].

Remark 2. It follows by [7, page 205] that if \(f\) is an analytic function in \(\mathbb{D}\), then we have the inequalities \begin{equation*} \rho _{1}(f)\leq \rho _{M,1}(f)\leq \rho _{1}(f)+1 \end{equation*} which are the best possible in the sense that there are analytic functions \( g \) and \(h\) such that \(\rho _{1}(g)=\rho _{M,1}(g)\) and \(\rho _{M,1}(h)=\rho _{1}(h)+1\), see [12]. However, it follows by [4, Proposition 2.2.2] that \(\rho _{p}(f)=\rho _{M,p}(f)\) for \(p\geq 2\).

The iterated \(p\)-type of a meromorphic function \(f\) \ in \(\mathbb{D}\) with \( 0< \rho _{p}(f)< +\infty \) is defined by \begin{equation*} \tau _{p}(f)=\limsup_{r\rightarrow 1^{-}}\left( 1-r\right) ^{\rho _{p}(f)}\log _{p-1}^{+}T(r,f), \end{equation*} and if \(f\) is analytic in \(\mathbb{D}\) with \(0< \rho _{M,p}(f)< +\infty \), then the iterated \(p\)-type is defined by \begin{equation*} \tau _{M,p}(f)=\limsup_{r\rightarrow 1^{-}}\left( 1-r\right) ^{\rho _{M,p}(f)}\log _{p}^{+}M(r,f). \end{equation*}

Remark 3. It follows by [4, Proposition 2.2.2] that \(\tau _{p}(f)=\tau _{M,p}(f)\) for \(p\geq 2\).

2. Basic results

Heittokangas et al. in [10] proved the following results.

Theorem 1. ([10]) Let \(k\in \mathbb{N}\). If the coefficients \(a_{0}(z),a_{1}(z), \dots ,a_{k-1}(z)\) are analytic in \(\mathbb{D}\) such that \(\rho _{M,p}(a_{j})< \rho _{M,p}(a_{0})\) for all \(j=1,\dots ,k-1\), then all solutions \(f\not\equiv 0\) of \((1)\) satisfy \(\rho _{M,p+1}(f)=\rho _{M,p}(a_{0})\).

Theorem 2.([10]) Let \(k\in \mathbb{N}\). If the coefficients \(a_{0}(z),a_{1}(z), \dots ,a_{k-1}(z)\) are analytic in \(\mathbb{D}\) such that \(\rho _{M,p}(a_{j})\leq \rho _{M,p}(a_{0})\) for all \(j=1,\dots ,k-1\) and \begin{equation*} \sum_{\rho _{M,p}(a_{j})=\rho _{M,p}(a_{0})}\tau _{M,p}(a_{j})< \tau _{M,p}(a_{0}), \end{equation*} then all solutions \(f\not\equiv 0\) of \((1)\) satisfy \(\rho _{M,p+1}(f)=\rho _{M,p}(a_{0})\).

In [13], Hamouda gave an improvement of Theorem 2 as follows.

Theorem 3.([13]) Let \(k\in \mathbb{N}\). If the coefficients \(a_{0}(z),a_{1}(z), \dots ,a_{k-1}(z)\) are analytic in \(\mathbb{D}\) such that \(\rho _{M,p}(a_{j})\leq \rho _{M,p}(a_{0})\) for all \(j=1,\dots ,k-1\) and \begin{equation*} \max \left\{ \tau _{M,p}(a_{j}):\rho _{M,p}(a_{j})=\rho _{M,p}(a_{0})\right\} < \tau _{M,p}(a_{0}), \end{equation*} then all solutions \(f\not\equiv 0\) of \((1)\) satisfy \(\rho _{M,p+1}(f)=\rho _{M,p}(a_{0})\).

Our proofs depend mainly upon the following lemmas. Before stating these lemmas, we recall the concept of logarithmic measure. The logarithmic measure of a set \(S\subset (0,1)\) is given by \begin{equation*} lm(S):=\int_{S}\frac{dt}{1-t}. \end{equation*} Throughout this paper, the set \(F\subset \lbrack 0,1)\) is not necessarily the same at each occurrence, but it is always of finite logarithmic measure, that is, \(lm(F)< +\infty \). To avoid problems with the exceptional sets, we need Lemma 1 below.
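For instance (a simple illustration added here), for an interval \([a,b)\subset \lbrack 0,1)\) one has \begin{equation*} lm([a,b))=\int_{a}^{b}\frac{dt}{1-t}=\log \frac{1-a}{1-b}, \end{equation*} so each interval \(\left[ 1-2^{-n},1-2^{-n-1}\right) \) has logarithmic measure \(\log 2\), and a set containing infinitely many such intervals has infinite logarithmic measure even though its Lebesgue measure may be finite.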

Lemma 1. ([2, 14]) Let \(g:[0,1)\mapsto \mathbb{R}\) and \(h:[0,1)\mapsto \mathbb{R}\) be monotone non-decreasing functions such that \(g(r)\leq h(r)\) holds outside of an exceptional set \(F\subset \lbrack 0,1)\) of finite logarithmic measure. Then there exists a \(d\in (0,1)\) such that if \(s(r)=1-d(1-r)\), then \( g(r)\leq h(s(r))\) for all \(r\in \lbrack 0,1)\).

Lemma 2. ([12, Theorem 3.1]) Let \(k\) and \(j\) be integers satisfying \(k>j\geq 0\), and let \( \varepsilon >0\) and \(d\in (0,1)\). If \(f\) is meromorphic in \(\mathbb{D}\) such that \(f^{(j)}\not\equiv 0\), then \begin{equation*} \left\vert \frac{f^{(k)}(z)}{f^{(j)}(z)}\right\vert \leq \left[ \left( \frac{ 1}{1-|z|}\right) ^{2+\varepsilon }\max \left\{ \log \frac{1}{1-|z|} ;T(s(|z|),f)\right\} \right] ^{k-j} \end{equation*} for \(|z|\not\in F,\) where \(F\subset \lbrack 0,1)\) is a set of finite logarithmic measure, and where \(s(|z|)=1-d(1-|z|)\).

Lemma 3. ([10]) Let \(f\) be a meromorphic function in the unit disc with \(\rho _{p}(f):=\rho < +\infty \) for some \(p\in \mathbb{N}\), and let \(\varepsilon >0\) be a given constant. Then, there exists a set \(F\subset (0,1)\) of finite logarithmic measure such that for all \(z\) with \(|z|=r\not\in F\) and for every integer \(j\geq 1\), we have:

  • (i) If \(p=1\), then
    \begin{equation} \left\vert \frac{f^{(j)}(z)}{f(z)}\right\vert \leq \frac{1}{(1-r)^{j(\rho +2+\varepsilon )}}. \label{conlem3} \end{equation}
    (2)
  • (ii) If \(p\geq 2\), then
    \begin{equation} \left\vert \frac{f^{(j)}(z)}{f(z)}\right\vert \leq \exp _{p-1}\left\{ \frac{1 }{(1-r)^{\rho +\varepsilon }}\right\} . \label{conlem4} \end{equation}
    (3)

Lemma 4. ([10]) Let \(a_{0}(z),a_{1}(z),\dots ,a_{k-1}(z)\) be analytic functions in the unit disc \(\mathbb{D}\). Then, every solution \(f\not\equiv 0\) of the Equation \((1)\) satisfies \begin{equation*} \rho _{p+1}(f)=\rho _{M,p+1}(f)\leq \max \left\{ \rho _{M,p}(a_{j}):j=0,\dots ,k-1\right\} . \end{equation*}

Remark 4. If \(p\geq 2\), then by Remark 2 and Lemma 4, we obtain that every solution \(f\not\equiv 0\) of Equation \((1)\) satisfies \begin{equation*} \rho _{p+1}(f)\leq \max \left\{ \rho _{p}(a_{j}):j=0,\dots ,k-1\right\} . \end{equation*}

Lemma 5. ([2, 3, 7]) Let \(f\) be a meromorphic function in the unit disc \(\mathbb{D}\) and let \(k\in \mathbb{N}\). Then \begin{equation*} m\left( r,\frac{f^{(k)}}{f}\right) =S(r,f), \end{equation*} where \(S(r,f)=O\left( \log ^{+}T(r,f)+\log \left( \frac{1}{1-r}\right) \right) \), possibly outside a set \(F\subset \lbrack 0,1)\) with finite logarithmic measure.

Lemma 6. ([15]) Let \(f\) be a meromorphic function in the unit disc \(\mathbb{D}\) for which \(i\left( f\right) =p\geq 1\) and \(\rho _{p}\left( f\right) =\rho < +\infty \), and let \(k\geq 1\) be an integer. Then for any \(\varepsilon >0,\) \begin{equation*} m\left( r,\frac{f^{\left( k\right) }}{f}\right) =O\left( \exp _{p-2}\left\{ \frac{1}{1-r}\right\} ^{\rho +\varepsilon }\right) \end{equation*} holds for all \(r\) outside a set \(F\) \(\subset \lbrack 0,1)\) with \(\int_{F}\frac{dr}{1-r}< +\infty .\)

Lemma 7. For an integer \(p\geq 2\), let \(f\) be a meromorphic function in \( \mathbb{D}\) such that \(0< \rho _{p}(f)=\rho < +\infty \) (see Definition 7), \(0< \tau _{p}(f)=\tau < +\infty \) and \(0< \tau _{p}^{\ast }(f)=\tau ^{\ast }< +\infty \). Then for any given \(\eta < \tau ^{\ast }\), there exists a subset \(E\subset \lbrack 0,1)\) that has an infinite logarithmic measure \(\int_{E}\frac{dr}{1-r}=+\infty \) such that for all \(r\in E,\) we have \begin{equation*} \log _{p-2}T(r,f)>\eta \exp \left\{ \frac{\tau }{\left( 1-r\right) ^{\rho }} \right\} . \end{equation*}

Proof. By the definition of \(\tau _{p}^{\ast }(f)\), there exists an increasing sequence \(\{r_{m}\}_{m=1}^{+\infty }\subset \lbrack 0,1)\) satisfying \(\frac{1 }{m}+\left( 1-\frac{1}{m}\right) r_{m}< r_{m+1},\;(r_{m}\longrightarrow 1^{-}, \) \(m\longrightarrow +\infty )\) and \begin{equation*} \lim_{m\rightarrow +\infty }\frac{\log _{p-2}T(r_{m},f)}{\exp \left\{ \frac{ \tau }{\left( 1-r_{m}\right) ^{\rho }}\right\} }=\tau ^{\ast }. \end{equation*} Then, for any given \(0< \varepsilon < \tau ^{\ast }\), there exists a positive integer \(m_{0}\) such that for all \(m\geq m_{0}\), we have

\begin{equation} \log _{p-2}T(r_{m},f)>(\tau ^{\ast }-\varepsilon )\exp \left\{ \frac{\tau }{ \left( 1-r_{m}\right) ^{\rho }}\right\} . \label{for1} \end{equation}
(4)
For \(r\in \left[ r_{m},\frac{1}{m}+\left( 1-\frac{1}{m}\right) r_{m}\right] \), we get \begin{equation*} \lim_{m\rightarrow +\infty }\frac{\exp \left\{ \tau \left[ \left( 1-\frac{1}{ m}\right) \left( \frac{1}{1-r}\right) \right] ^{\rho }\right\} }{\exp \left\{ \frac{\tau }{\left( 1-r\right) ^{\rho }}\right\} }=1. \end{equation*} Then for any given \(0< \eta < \tau ^{\ast }-\varepsilon \), there exists a positive integer \(m_{1}\) such that for all \(m\geq m_{1}\), and for all \(r\in \left[ r_{m},\frac{1}{m}+\left( 1-\frac{1}{m}\right) r_{m}\right] \), we have
\begin{equation} \frac{\exp \left\{ \tau \left[ \left( 1-\frac{1}{m}\right) \left( \frac{1}{ 1-r}\right) \right] ^{\rho }\right\} }{\exp \left\{ \frac{\tau }{\left( 1-r\right) ^{\rho }}\right\} }>\frac{\eta }{\tau ^{\ast }-\varepsilon }. \label{for2} \end{equation}
(5)
By (4) and (5), for all \(m\geq m_{2}=\max \left\{ { m_{0};m_{1}}\right\} \) and for all \(r\in \left[ r_{m},\frac{1}{m}+\left( 1- \frac{1}{m}\right) r_{m}\right] \), we have \begin{eqnarray*} \log _{p-2}T(r,f)&\geq& \log _{p-2}^{+}T(r_{m},f)>(\tau ^{\ast }-\varepsilon )\exp \left\{ \frac{\tau }{\left( 1-r_{m}\right) ^{\rho }}\right\}\\ &\geq& (\tau ^{\ast }-\varepsilon )\exp \left\{ \tau \left[ \left( 1-\frac{1}{m }\right) \left( \frac{1}{1-r}\right) \right] ^{\rho }\right\} >\eta \exp \left\{ \frac{\tau }{\left( 1-r\right) ^{\rho }}\right\} . \end{eqnarray*} Set \begin{equation*} E=\bigcup_{m=m_{2}}^{+\infty }\left[ r_{m},\frac{1}{m}+\left( 1-\frac{1}{m} \right) r_{m}\right] . \end{equation*} Then \begin{equation*} \int_{E}\frac{dt}{1-t}=\sum_{m=m_{2}}^{+\infty }\int_{r_{m}}^{\frac{1}{m} +\left( 1-\frac{1}{m}\right) r_{m}}\frac{dt}{1-t}=\sum_{m=m_{2}}^{+\infty }\log \frac{m}{m-1}=+\infty . \end{equation*}
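Note that the last series indeed diverges, since it telescopes: \begin{equation*} \sum_{m=m_{2}}^{M}\log \frac{m}{m-1}=\log \frac{M}{m_{2}-1}\longrightarrow +\infty \quad \text{as}\quad M\longrightarrow +\infty . \end{equation*}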

By using reasoning similar to that in the proof of Lemma 7, we easily obtain the following lemma.

Lemma 8. For an integer \(p\geq 2\), let \(f\) be a meromorphic function in \(\mathbb{D}\) such that \(0< \rho _{M,p}(f)=\rho < +\infty \), \(0< \tau _{M,p}(f)=\tau < +\infty \) and \(0< \tau _{M,p}^{\ast }(f)=\tau ^{\ast }< +\infty \). Then for any given \(\eta < \tau ^{\ast }\), there exists a subset \(E\subset \lbrack 0,1)\) that has an infinite logarithmic measure \(\int_{E} \frac{dr}{1-r}=+\infty \) such that for all \(r\in E,\) we have \begin{equation*} \log _{p-1}M(r,f)>\eta \exp \left\{ \frac{\tau }{\left( 1-r\right) ^{\rho }} \right\} . \end{equation*}

Lemma 9. ([16]) Let \(f\) be a solution of Equation \(\left( 1\right) ,\)where the coefficients \(a_{j}\left( z\right) \) \(\left( j=0,...,k-1\right) \) are analytic functions in the disc \(\Delta _{R}=\left\{ z\in\mathbb{C}:\left\vert z\right\vert < R\right\} ,\) \(0< R\leq \infty .\) Let \(n_{c}\in \left\{ 1,...,k\right\} \) be the number of nonzero coefficients \(a_{j}\left( z\right) \) \(\left( j=0,...,k-1\right) ,\) and let \(\theta \in \left[ 0,2\pi \right] \) and \( \varepsilon >0.\) If \(z_{\theta }=\nu e^{i\theta }\in \Delta _{R}\) is such that \(a_{j}\left( z_{\theta }\right) \neq 0\) for some \(j=0,...,k-1,\) then for all \(\nu < r< R,\) \begin{equation*} \left\vert f\left( re^{i\theta }\right) \right\vert \leq C\exp \left( n_{c} \overset{r}{\underset{\nu }{\int }}\underset{j=0,...,k-1}{\max }\left\vert a_{j}\left( te^{i\theta }\right) \right\vert ^{\frac{1}{k-j}}dt\right) , \end{equation*} where \(C>0\) is a constant satisfying \begin{equation*} C\leq \left( 1+\varepsilon \right) \underset{j=0,...,k-1}{\max }\left( \frac{ \left\vert f^{\left( j\right) }\left( z_{\theta }\right) \right\vert }{ \left( n_{c}\right) ^{j}\underset{n=0,...,k-1}{\max }\left\vert a_{n}\left( z_{\theta }\right) \right\vert ^{\frac{j}{k-n}}}\right) . \end{equation*}

Lemma 10. Let \(\{a_{j}(z)\}_{0\leq j\leq k-1}\) be analytic functions in the disc \(\mathbb{D}\) such that \(0< p< \infty \), \(0< \max \{\rho _{M,p}(a_{j}):j=1,\dots ,k-1\}\leq \rho _{M,p}\left( a_{0}\right) =\rho < \infty \) and \(\max \{\tau _{M,p}(a_{j}):j=1,\dots ,k-1\}\leq \tau _{M,p}\left( a_{0}\right) =\tau < \infty \). Then, every solution \(f\not\equiv 0\) of Equation \(\left( 1\right) \) with \(\rho _{p+1}(f)=\rho \) satisfies \(\tau _{p+1}(f)\leq \tau \).

Proof. Let \(f\not\equiv 0\) be a solution of \(\left( 1\right) \) with \(\rho _{p+1}(f)=\rho .\) Let \(\theta _{0}\in \left[ 0,2\pi \right) \) be such that \( \left\vert f\left( re^{i\theta _{0}}\right) \right\vert =M\left( r,f\right) . \) By Lemma 9, we have

\begin{eqnarray} M\left( r,f\right) &\leq& C\exp \left( n_{c}\overset{r}{\underset{\nu }{\int }} \underset{j=0,...,k-1}{\max }\left\vert a_{j}\left( te^{i\theta }\right) \right\vert ^{\frac{1}{k-j}}dt\right)\nonumber\\ &\leq& C\exp \left( n_{c}\overset{r}{\underset{\nu }{\int }}\underset{ j=0,...,k-1}{\max }\left( M\left( r,a_{j}\right) \right) ^{\frac{1}{k-j} }dt\right)\nonumber\\ &\leq& C\exp \left( n_{c}\left( r-\nu \right) \underset{j=0,...,k-1}{\max } \left\{ M\left( r,a_{j}\right) \right\} \right) . \label{eq3.10Hu} \end{eqnarray}
(6)
We have \(\max \left\{ \rho _{M,p}(a_{j}):j=0,1,...,k-1\right\} =\rho _{M,p}\left( a_{0}\right) =\rho \). By the definition of \(\tau _{M,p}(a_{j})\) , for any given \(\varepsilon >0\) and \(r\rightarrow 1^{-},\) we obtain
\begin{equation} M(r,a_{j})\leq \exp _{p}\left\{ \frac{\tau _{M,p}(a_{j})+\frac{\varepsilon }{ 2}}{\left( 1-r\right) ^{\rho _{M,p}(a_{j})}}\right\} \leq \exp _{p}\left\{ \frac{\tau +\frac{\varepsilon }{2}}{\left( 1-r\right) ^{\rho }}\right\} \text{ }\left( j=0,1,...,k-1\right) . \label{eq3.12Hu} \end{equation}
(7)
By (6) and (7), we have for \(r\rightarrow 1^{-}\)
\begin{equation} M\left( r,f\right) \leq \exp _{p+1}\left\{ \frac{\tau +\varepsilon }{\left( 1-r\right) ^{\rho }}\right\} . \label{eq3.14Hu} \end{equation}
(8)
Then, it follows from (8) and the arbitrariness of \(\varepsilon >0\) that \(\tau _{p+1}(f)=\tau _{M,p+1}(f)\leq \tau .\)

3. Main results

In this article, we aim to answer the following questions:
  1. What can be said about the growth of solutions of the Equation \((1)\) in the case when \(\rho _{M,p}(a_{j})\leq \rho _{M,p}(a_{0})\) for all \(j=1,\dots ,k-1\) and \begin{equation*} \max \left\{ \tau _{M,p}(a_{j}):\rho _{M,p}(a_{j})=\rho _{M,p}(a_{0})\right\} \leq \tau _{M,p}(a_{0})? \end{equation*}
  2. What happens when we replace \(\rho _{M,p}\) and \(\tau _{M,p}\) by \(\rho _{p}\) and \(\tau _{p}\)?
As a first result, we give an improvement of Theorems 1 and 2 of [13].

Theorem 4. Let \(a_{0}(z),a_{1}(z),\dots ,a_{k-1}(z)\) be meromorphic functions in the unit disc \(\mathbb{D}\), and \(a_{0}(z)\not\equiv 0\). Suppose that there exist a point \(\omega \in \partial \mathbb{D}\), a curve \(\gamma \) tending to \(\omega \) and a set \(F_{1}\subset (0,1)\) of finite logarithmic measure such that for \(z\in \gamma \) and \(|z|=r\not\in F_{1},\) we have for the largest integer \(p\geq 1\)

\begin{equation} \lim_{z\rightarrow \omega }\frac{\displaystyle{\sum_{j=1}^{k-1}\left\vert a_{j}(z)\right\vert }+1}{\left\vert a_{0}(z)\right\vert }\exp _{p}\left\{ \frac{\lambda }{(1-r)^{\alpha }}\right\} =0 \label{conthm1} \end{equation}
(9)
for all \(\lambda >0\) and \(\alpha >0\). Then, every nontrivial meromorphic solution \(f\) of the Equation \((1)\) satisfies \(\rho _{p+1}(f)=+\infty \).

Proof. Suppose that \(f\not\equiv 0\) is a solution of the Equation (1) with \(\rho _{p+1}(f)=\rho < +\infty \). By (1), \(f\) satisfies

\begin{equation} 1\leq \frac{1}{|a_{0}(z)|}\left\vert \frac{f^{(k)}}{f}\right\vert +\sum_{j=1}^{k-1}\frac{|a_{j}(z)|}{|a_{0}(z)|}\left\vert \frac{f^{(j)}}{f} \right\vert . \label{11} \end{equation}
(10)
For \(p\geq 1\), by Lemma 3, for all \(z\) satisfying \(|z|=r\not\in F\) (\( F\) has finite logarithmic measure), we obtain
\begin{equation} \left\vert \frac{f^{(j)}(z)}{f(z)}\right\vert \leq \exp _{p}\left\{ \frac{1}{ (1-|z|)^{\alpha }}\right\} , \label{12} \end{equation}
(11)
where \(\alpha >0\) is a constant which depends on \(\rho \), \(\varepsilon \) and \(j=1,\dots ,k\). Substituting (11) into (10) yields
\begin{equation} 1\leq \frac{\displaystyle{\sum_{j=1}^{k-1}\left\vert a_{j}(z)\right\vert }+1 }{\left\vert a_{0}(z)\right\vert }\exp _{p}\left\{ \frac{1}{(1-|z|)^{\alpha } }\right\} . \label{13} \end{equation}
(12)
By (9), for \(z\in \gamma \) such that \(|z|=r\not\in F_{1}\) (\( F_{1} \) has finite logarithmic measure), we know that as \(z\rightarrow \omega \)
\begin{equation} \frac{\displaystyle{\sum_{j=1}^{k-1}\left\vert a_{j}(z)\right\vert }+1}{ \left\vert a_{0}(z)\right\vert }\exp _{p}\left\{ \frac{1}{(1-|z|)^{\alpha }} \right\} \longrightarrow 0. \label{14} \end{equation}
(13)
Thus, for all \(z\in \gamma \) with \(|z|=r\not\in F_{1}\cup F\), by (12) and (13), we get a contradiction. Hence, every meromorphic solution \( f\not\equiv 0\) of (1) has an infinite \((p+1)\)-order.

Remark 5. In [13], under the same hypotheses of Theorem 4, Hamouda obtained that \(\rho _{p+1}(f)\geq \alpha .\)

In all that follows, we consider \(p\in \mathbb{N}\backslash \{1\}\). In an attempt to answer the above questions, we prove the following results.

Theorem 5. Let \(a_{0}(z),a_{1}(z),\dots ,a_{k-1}(z)\) be analytic functions in the unit disc \(\mathbb{D}\) satisfying \(\rho _{M,p}(a_{j})\leq \rho _{M,p}(a_{0})=\rho \) \((0< \rho < +\infty )\) and \(\tau _{M,p}(a_{j})\leq \tau _{M,p}(a_{0})=\tau \) \((0< \tau < +\infty )\) for all \(j=1,\dots ,k-1\). Suppose that there exist two positive real numbers \(\alpha \) and \(\beta \) with \( 0\leq \beta < \alpha \), such that

\begin{equation} \left\vert a_{0}(z)\right\vert \geq \exp _{p-1}\left\{ \alpha \exp \frac{ \tau }{\left( 1-r\right) ^{\rho }}\right\} \label{con1} \end{equation}
(14)
and
\begin{equation} \left\vert a_{j}(z)\right\vert \leq \exp _{p-1}\left\{ \beta \exp \frac{\tau }{\left( 1-r\right) ^{\rho }}\right\} ,\text{ }j=1,\dots ,k-1 \label{con2} \end{equation}
(15)
as \(|z|=r\rightarrow 1^{-}\) for \(r\in E_{1}\) (\(E_{1}\) is of infinite logarithmic measure). Then, every solution \(f\not\equiv 0\) of the Equation \( \left( 1\right) \) satisfies \(\rho _{p}\left( f\right) =+\infty ,\) \( \rho _{p+1}\left( f\right) =\rho \) and \(d^{\rho }\tau \leq \tau _{p+1}(f)\leq \tau ,\) \(d\in \left( 0,1\right) \).

Proof. Let \(f\not\equiv 0\) be a solution of the Equation (1). By (1), \(f\) satisfies

\begin{equation} \left\vert a_{0}(z)\right\vert \leq \left\vert \frac{f^{(k)}}{f}\right\vert +\sum_{j=1}^{k-1}|a_{j}(z)|\left\vert \frac{f^{(j)}}{f}\right\vert . \label{pr1} \end{equation}
(16)
By hypotheses of Theorem 5 and Lemma 4, we know that \( \rho _{p+1}(f)\leq \rho \). Suppose that \(\rho _{p+1}(f)=\rho _{1}< \rho \). Then, by (3) for all \(0< \varepsilon < \rho -\rho _{1}\), we have
\begin{equation} \left\vert \frac{f^{(j)}(z)}{f(z)}\right\vert \leq \exp _{p}\left\{ \frac{1}{ (1-r)^{\rho _{1}+\varepsilon }}\right\} ,\;j=1,\dots ,k, \label{pr2} \end{equation}
(17)
where \(|z|=r\not\in F\). By substituting (14), (15) and (17) into (16) we obtain
\begin{equation} \exp _{p-1}\left\{ \alpha \exp \frac{\tau }{\left( 1-r\right) ^{\rho }} \right\} \leq k\exp _{p-1}\left\{ \beta \exp \frac{\tau }{\left( 1-r\right) ^{\rho }}\right\} \exp _{p}\left\{ \frac{1}{(1-r)^{\rho _{1}+\varepsilon }} \right\} , \label{pr3} \end{equation}
(18)
for all \(r\in E_{1}\backslash F\). Hence, we get
\begin{equation} (\alpha -\beta )\exp \left\{ \frac{\tau }{\left( 1-r\right) ^{\rho }} \right\} \leq \exp \left\{ \frac{1}{(1-r)^{\rho _{1}+\varepsilon }}\right\} +C_{1} \label{pr4} \end{equation}
(19)
for some constant \(C_{1}>0,\) which is a contradiction as \(|z|=r\rightarrow 1^{-}\), \(r\in E_{1}\backslash F\), since \(\alpha >\beta \geq 0\) and \(\rho >\rho _{1}+\varepsilon \). Thus, \(\rho _{p+1}(f)=\rho \). Now, by Lemma 2, we have for \(j=1,\dots ,k\)
\begin{equation} \left\vert \frac{f^{(j)}(z)}{f(z)}\right\vert \leq \left[ \left( \frac{1}{ 1-|z|}\right) ^{2+\varepsilon }\max \left\{ \log \frac{1}{1-|z|} ;T(s(|z|),f)\right\} \right] ^{k} \label{pr5} \end{equation}
(20)
for all \(r\not\in F_{2}\), where \(F_{2}\subset \lbrack 0,1)\) is a set of finite logarithmic measure. From (14), (15), (16) and (20), we obtain
\begin{equation} \exp _{p-1}\left\{ \alpha \exp \frac{\tau }{\left( 1-r\right) ^{\rho }} \right\} \leq k\exp _{p-1}\left\{ \beta \exp \frac{\tau }{\left( 1-r\right) ^{\rho }}\right\} \left( \frac{1}{1-r}\right) ^{\left( 2+\varepsilon \right) k}T^{k}(s(r),f) \label{pr6} \end{equation}
(21)
for all \(r\in E_{1}\backslash F_{2}\). Hence
\begin{equation} \log (\alpha -\beta )+\frac{\tau }{\left( 1-r\right) ^{\rho }}\leq \log _{p} \frac{1}{1-r}+\log _{p}T(s(r),f)+C_{2} \label{pr7} \end{equation}
(22)
for some constant \(C_{2}>0\) and for all \(r\in E_{1}\backslash F_{2}\). Setting \(R=s(r)=1-d(1-r)\), \(d\in (0,1)\), we have \(1-r=\frac{1-R}{d}\). Then by Lemma 1, we obtain, for \( R\longrightarrow 1^{-}\),
\begin{equation} \log (\alpha -\beta )+\frac{d^{\rho }\tau }{\left( 1-R\right) ^{\rho }}\leq \log _{p}\frac{d}{1-R}+\log _{p}T(R,f)+C_{2}. \label{pr8} \end{equation}
(23)
Since \(0< \rho _{p+1}(f)=\rho < \infty ,\) from (23) we deduce that \begin{equation*} \tau _{p+1}(f)=\underset{R\rightarrow 1^{-}}{\lim \sup }\frac{\log _{p}^{+}T(R,f)}{\frac{1}{\left( 1-R\right) ^{\rho }}}\geq d^{\rho }\tau . \end{equation*} By Lemma 10, we conclude that \(d^{\rho }\tau \leq \tau _{p+1}(f)\leq \tau .\)

Theorem 6. Let \(a_{0}(z),a_{1}(z),\dots ,a_{k-1}(z)\) be analytic functions in the unit disc \(\mathbb{D}\) satisfying \(\rho _{p}(a_{j})\leq \rho _{p}(a_{0})=\rho \) \((0< \rho < +\infty )\) and \(\tau _{p}(a_{j})\leq \tau _{p}(a_{0})=\tau \) \((0< \tau < +\infty )\) for all \(j=1,\dots ,k-1\). Suppose that there exist two positive real numbers \(\alpha \) and \(\beta \) with \( 0\leq \beta < \alpha \), such that

\begin{equation} m(r,a_{0})\geq \exp _{p-2}\left\{ \alpha \exp \frac{\tau }{\left( 1-r\right) ^{\rho }}\right\} \label{con3} \end{equation}
(24)
and
\begin{equation} m(r,a_{j})\leq \exp _{p-2}\left\{ \beta \exp \frac{\tau }{\left( 1-r\right) ^{\rho }}\right\} ,\text{ }j=1,\dots ,k-1 \label{con4} \end{equation}
(25)
as \(|z|=r\rightarrow 1^{-}\) for \(r\in E_{2}\) (\(E_{2}\) is of infinite logarithmic measure). Then, every solution \(f\not\equiv 0\) of the Equation \( \left( 1\right) \) satisfies \(\ \rho _{p}\left( f\right) =+\infty ,\) \(\rho _{p+1}\left( f\right) =\rho \) and \(d^{\rho }\tau \leq \tau _{p+1}(f)\leq \tau ,\) \(d\in \left( 0,1\right) \).

Proof. Let \(f\not\equiv 0\) be a solution of the Equation (1). By (1) we can write

\begin{equation} a_{0}(z)=-\left( \frac{f^{(k)}}{f}+\sum_{j=1}^{k-1}a_{j}(z)\frac{f^{(j)}}{f} \right) . \label{21} \end{equation}
(26)
By hypotheses of Theorem 6 and Lemma 4, we know that \( \rho _{p+1}(f)\leq \rho \). Suppose that \(\rho _{p+1}(f)=\rho _{1}< \rho \). Then by Lemma 6, for all \(0< \varepsilon < \rho -\rho _{1}\) and for all \(|z|=r\notin F\), we have for \(j=1,\dots ,k\)
\begin{equation} m\left( r,\frac{f^{(j)}}{f}\right) =O\left( \exp _{p-1}\left\{ \frac{1}{ (1-r)^{\rho _{1}+\varepsilon }}\right\} \right) . \label{22} \end{equation}
(27)
Now, since \(p\geq 2\), it follows from (24), (25), (26) and (27) that
\begin{eqnarray} \exp _{p-2}\left\{ \alpha \exp \frac{\tau }{\left( 1-r\right) ^{\rho }} \right\} &\leq& m(r,a_{0})\notag\\&\leq& \sum_{j=1}^{k-1}m(r,a_{j})+\sum_{j=1}^{k-1}m\left( r,\frac{f^{(j)}}{f} \right) +O\left( 1\right)\notag\\ &\leq& (k-1)\exp _{p-2}\left\{ \beta \exp \frac{\tau }{\left( 1-r\right) ^{\rho }}\right\} +M\exp _{p-1}\left\{ \frac{1}{(1-r)^{\rho _{1}+\varepsilon }}\right\} \label{24} \end{eqnarray}
(28)
holds for all \(z\) satisfying \(|z|=r\in E_{2}\backslash F\) as \(r\rightarrow 1^{-},\) and \(M>0\) is some constant. Hence, from (28) we obtain \begin{equation*} (\alpha -\beta )\exp \left\{ \frac{\tau }{\left( 1-r\right) ^{\rho }} \right\} \leq \exp \left\{ \frac{1}{(1-r)^{\rho _{1}+\varepsilon }}\right\} +C_{3} \end{equation*} for some constant \(C_{3}>0,\) which is a contradiction as \(|z|=r\rightarrow 1^{-}\), \(r\in E_{2}\backslash F\), since \(\alpha >\beta \geq 0\) and \(\rho >\rho _{1}+\varepsilon \). Thus, \(\rho _{p+1}(f)=\rho \). Now, it follows by (24), (25), (26) and Lemma 5 that
\begin{eqnarray} \exp _{p-2}\left\{ \alpha \exp \frac{\tau }{\left( 1-r\right) ^{\rho }} \right\} &\leq& m(r,a_{0})\notag\\ &\leq& \sum_{j=1}^{k-1}m(r,a_{j})+\sum_{j=1}^{k-1}m\left( r,\frac{f^{(j)}}{f} \right) +O\left( 1\right)\notag\\ &\leq& (k-1)\exp _{p-2}\left\{ \beta \exp \frac{\tau }{\left( 1-r\right) ^{\rho }}\right\} +O\left( \log ^{+}T(r,f)+\log \left( \frac{1}{1-r}\right) \right) \label{25} \end{eqnarray}
(29)
for all \(|z|=r\in E_{2}\backslash F\) sufficiently close to \(1\). Then, for all such \(r\),
\begin{equation} \log (\alpha -\beta )+\frac{\tau }{\left( 1-r\right) ^{\rho }}\leq \log _{p}^{+}T(r,f)+\log _{p}\left( \frac{1}{1-r}\right) +C_{4} \label{26} \end{equation}
(30)
for some constant \(C_{4}>0.\) Then by Lemma 1, we obtain for \( |z|=r\in E_{2},\) \(s\left( r\right) \rightarrow 1^{-}\)
\begin{equation} \log (\alpha -\beta )+\frac{\tau }{\left( 1-r\right) ^{\rho }}\leq \log _{p}^{+}T(s\left( r\right) ,f)+\log _{p}\left( \frac{1}{1-s\left( r\right) } \right) +C_{4}, \label{27} \end{equation}
(31)
where \(s(r)=1-d(1-r)\), \(d\in (0,1)\). Hence, by (31) we obtain \begin{equation*} \tau _{p+1}(f)=\underset{s\left( r\right) \rightarrow 1^{-}}{\lim \sup } \frac{\log _{p}^{+}T(s\left( r\right) ,f)}{\frac{1}{\left( 1-s\left( r\right) \right) ^{\rho }}}\geq d^{\rho }\tau . \end{equation*} By Lemma 10, we conclude that \(d^{\rho }\tau \leq \tau _{p+1}(f)\leq \tau .\)

In [17], in order to study the growth of meromorphic solutions of differential equations of finite iterated \(p\)-order in the complex plane, Hamouda introduced a new type of growth (see [17, p. 46]). Following this definition, we introduce a new notion of type, denoted \(\tau _{p}^{\ast }(f)\), related to the iterated \(p\)-order of a meromorphic function \(f\) in the unit disc, as follows.

Definition 7. For \(p\geq 2,\) let \(f\) be a meromorphic function of finite iterated \(p\) -order in \(\mathbb{D}\) such that \(0< \rho _{p}\left( f\right) =\rho < +\infty \) and \(0< \tau _{p}\left( f\right) =\tau < +\infty \), we define \(\tau _{p}^{\ast }(f)\) by \begin{equation*} \tau _{p}^{\ast }(f)=\limsup_{r\rightarrow 1^{-}}\frac{\log _{p-2}^{+}T(r,f) }{\exp \left\{ \frac{\tau }{\left( 1-r\right) ^{\rho }}\right\} }. \end{equation*} If \(f\)\ is an analytic function in \(\mathbb{D}\) with \(0< \tau _{M,p}\left( f\right) =\tau _{M}< +\infty \), we also define \begin{equation*} \tau _{M,p}^{\ast }(f)=\limsup_{r\rightarrow 1^{-}}\frac{\log _{p-1}^{+}M(r,f)}{\exp \left\{ \frac{\tau _{M}}{\left( 1-r\right) ^{\rho }} \right\} }. \end{equation*}
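To illustrate Definition 7 (an illustrative computation, not a statement from the references), suppose that a meromorphic function \(f\) in \(\mathbb{D}\) satisfies \(\log _{p-2}T(r,f)=\tau ^{\ast }\exp \left\{ \frac{\tau }{\left( 1-r\right) ^{\rho }}\right\} \) for \(r\) close to \(1^{-}\), with constants \(\rho ,\tau ,\tau ^{\ast }\in (0,+\infty )\). Then \begin{equation*} \log _{p-1}^{+}T(r,f)=\log \tau ^{\ast }+\frac{\tau }{\left( 1-r\right) ^{\rho }}, \end{equation*} so that \(\rho _{p}(f)=\rho \), \(\tau _{p}(f)=\tau \) and \(\tau _{p}^{\ast }(f)=\tau ^{\ast }\); thus \(\tau _{p}^{\ast }(f)\) distinguishes between functions having the same iterated \(p\)-order and \(p\)-type.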

The following theorems improve and extend Theorems 2 and 3.

Theorem 8. Let \(a_{0}(z),a_{1}(z),\dots ,a_{k-1}(z)\) be analytic functions in the unit disc \(\mathbb{D}\) satisfying \(\rho _{M,p}(a_{j})\leq \rho _{M,p}(a_{0})=\rho \) \((0< \rho < +\infty )\) and \(\tau _{M,p}(a_{j})\leq \tau _{M,p}(a_{0})=\tau \) \((0< \tau < +\infty )\) for all \(j=1,\dots ,k-1\) and \ \begin{equation*} \max \left\{ \tau _{M,p}^{\ast }(a_{j}):j=1,\dots ,k-1\right\} < \tau _{M,p}^{\ast }(a_{0}). \end{equation*} Then all solutions \(f\not\equiv 0\) of \((1)\) satisfy \(\ \rho _{p}\left( f\right) =+\infty ,\) \(\rho _{p+1}\left( f\right) =\rho \) and \( d^{\rho }\tau \leq \tau _{p+1}(f)\leq \tau ,\) \(d\in \left( 0,1\right) \).

Proof. Suppose that all the coefficients of Equation (1) satisfy the hypotheses of Theorem 8. Now, let \(\alpha \) and \(\beta \) be two real numbers such that \begin{equation*} \max \left\{ \tau _{M,p}^{\ast }(a_{j}):j=1,\dots ,k-1\right\} < \beta < \alpha < \tau _{M,p}^{\ast }(a_{0}). \end{equation*} Since all the coefficients are analytic, for \(r\longrightarrow 1^{-}\) we have

\begin{equation} |a_{j}(z)|\leq \exp _{p-1}\left\{ \beta \exp \frac{\tau }{\left( 1-r\right) ^{\rho }}\right\} ,\quad j=1,\dots ,k-1 \label{113} \end{equation}
(32)
and by Lemma 8, we have
\begin{equation} M\left( r,a_{0}\right) =|a_{0}(z)|>\exp _{p-1}\left\{ \alpha \exp \frac{\tau }{\left( 1-r\right) ^{\rho }}\right\} \label{114} \end{equation}
(33)
for all \(r\in E\) (\(E\) is a set of infinite logarithmic measure). From (32) and (33), and by Theorem 5, we obtain the result.

Theorem 9. Let \(a_{0}(z),a_{1}(z),\dots ,a_{k-1}(z)\) be analytic functions in the unit disc \(\mathbb{D}\) satisfying \(\rho _{p}(a_{j})\leq \rho _{p}(a_{0})=\rho \) \((0< \rho < +\infty )\), \(\tau _{p}(a_{j})\leq \tau _{p}(a_{0})=\tau \) \((0< \tau < +\infty )\) for all \(j=1,\dots ,k-1\) and \ \begin{equation*} \max \left\{ \tau _{p}^{\ast }(a_{j}):j=1,\dots ,k-1\right\} < \tau _{p}^{\ast }(a_{0}). \end{equation*} Then all solutions \(f\not\equiv 0\) of \((1)\) satisfy \(\ \rho _{p}\left( f\right) =+\infty ,\) \(\rho _{p+1}\left( f\right) =\rho \) and \( d^{\rho }\tau \leq \tau _{p+1}(f)\leq \tau ,\) \(d\in \left( 0,1\right) \).

Proof. Suppose that all the coefficients of Equation (1) satisfy the hypotheses of Theorem 9. Now, let \(\alpha \) and \(\beta \) be two real numbers such that \begin{equation*} \max \left\{ \tau _{p}^{\ast }(a_{j}):j=1,\dots ,k-1\right\} < \beta < \alpha < \tau _{p}^{\ast }(a_{0}). \end{equation*} Since all the coefficients are analytic, for \(r\longrightarrow 1^{-}\) we have

\begin{equation} m\left( r,a_{j}\right) \leq \exp _{p-2}\left\{ \beta \exp \frac{\tau }{ \left( 1-r\right) ^{\rho }}\right\} ,\quad j=1,\dots ,k-1 \label{115} \end{equation}
(34)
and by Lemma 7, we have
\begin{equation} T\left( r,a_{0}\right) =m\left( r,a_{0}\right) >\exp _{p-2}\left\{ \alpha \exp \frac{\tau }{\left( 1-r\right) ^{\rho }}\right\} \label{116} \end{equation}
(35)
for all \(r\in E\) (\(E\) is a set of infinite logarithmic measure). From (34) and (35), and by Theorem 6, we obtain the result.

For some related results in the whole complex plane, see [18].

Acknowledgments

The authors would like to thank the referee for his/her valuable remarks, which led to an improvement of the presentation of this paper. This paper is supported by the University of Mostaganem (UMAB) (PRFU Project Code C00L03UN270120180005).

Author Contributions

All authors contributed equally to the writing of this paper. All authors read and approved the final manuscript.

Competing Interests

The authors do not have any competing interests in the manuscript.

References

  1. Pommerenke, C. (1982). On the mean growth of the solutions of complex linear differential equations in the disk. Complex Variables and Elliptic Equations, 1(1), 23-38.[Google Scholor]
  2. Heittokangas, J. (2000). On complex differential equations in the unit disc (Vol. 122). Suomalainen Tiedeakatemia.[Google Scholor]
  3. Hayman, W. K. (1964). Meromorphic functions, Oxford mathematical monographs.[Google Scholor]
  4. Laine, I. (1993). Nevanlinna theory and complex differential equations (Vol. 15). Walter de Gruyter & Co., Berlin.[Google Scholor]
  5. Laine, I. (2008). Complex differential equations, Handbook of differential equations: ordinary differential equations. Vol. IV, 269--363, Handb. Differ. Equ., Elsevier/North-Holland, Amsterdam.[Google Scholor]
  6. Yang, C. C., & Yi, H. X. (2003). Uniqueness theory of meromorphic functions (Vol. 557). Springer Science & Business Media.[Google Scholor]
  7. Tsuji, M. (1975). Potential theory in modern function theory. Chelsea Publishing Co., New York. [Google Scholor]
  8. Cao, T. B., & Yi, H. X. (2006). The growth of solutions of linear differential equations with coefficients of iterated order in the unit disc. Journal of Mathematical Analysis and Applications, 319(1), 278-294.[Google Scholor]
  9. Cao, T. B. (2009). The growth, oscillation and fixed points of solutions of complex linear differential equations in the unit disc. Journal of Mathematical Analysis and Applications, 352(2), 739-748.[Google Scholor]
  10. Heittokangas, J., Korhonen, R., & Rättyä, J. (2006). Fast growing solutions of linear differential equations in the unit disc. Results in Mathematics, 49(3-4), 265-278.[Google Scholor]
  11. Li, Y. Z. (2002). On the growth of the solution of two-order differential equations in the unit disc. Pure and Applied Mathematics, 4, 295-300.[Google Scholor]
  12. Chyzhykov, I., Gundersen, G. G., & Heittokangas, J. (2003). Linear differential equations and logarithmic derivative estimates. Proceedings of the London Mathematical Society, 86(3), 735-754.[Google Scholor]
  13. Hamouda, S. (2013). Iterated order of solutions of linear differential equations in the unit disc. Computational Methods and Function Theory, 13(4), 545-555.[Google Scholor]
  14. Bank, S. B. (1972). A general theorem concerning the growth of solutions of first-order algebraic differential equations. Compositio Mathematica, 25(1), 61-70.[Google Scholor]
  15. Belaidi, B. (2010). Oscillation of fast growing solutions of linear differential equations in the unit disc. Acta Universitatis Sapientiae, 2 (1), 25-38.[Google Scholor]
  16. Heittokangas, J., Korhonen, R., & Rättyä, J. (2004). Growth estimates for solutions of linear complex differential equations. Annales Academiae Scientiarum Fennicae Mathematica, 29, 233-246.[Google Scholor]
  17. Hamouda, S. (2015). On the iterated order of solutions of linear differential equations in the complex plane. Southeast Asian Bulletin of Mathematics, 39(1), 45--55.[Google Scholor]
  18. Zemirni, M. A., & Belaidi, B. (2018). Linear differential equations with fast-growing coefficients in complex plane. Nonlinear Studies, 25(3), 719-731.[Google Scholor]

Open Journal of Mathematical Analysis

Analysis and numeric of mixed approach for frictional contact problem in electro-elasticity

M. Bouallala\(^1\), EL-H. Essoufi, A. Zafrar
Univ. Hassan 1, Laboratory MISI, 26000 Settat, Morocco.; (M.B & E.E & A.Z)
Cadi Ayyad University, Polydisciplinary Faculty, Department of Mathematics and Computer Science, B.P. 4162 Safi, Morocco.; (M.B)
\(^1\)Corresponding Author: bouallalamustaphaan@gmail.com

Abstract

This work handles a mathematical model describing the process of contact between a piezoelectric body and a rigid foundation. The behavior of the material is modeled with an electro-elastic constitutive law. The contact is formulated by the Signorini conditions and Coulomb friction. A new decoupled mixed variational formulation is stated. Existence and uniqueness of the solution are proved using elements of saddle point theory and a fixed point technique. To show the efficiency of our approach, we present a decomposition iterative method, prove its convergence, and present some numerical tests.

Keywords:

Piezoelectricity, Coulomb friction, Signorini condition, Variational inequality, Fixed point process, Mixed variational formulation.

1. Introduction

The numerical study of piezoelectric [1, 2] contact problems [3, 4, 5, 6, 7] presents great challenges because of the non-coercivity and non-differentiability of some terms. On the one hand, in the variational formulation the linear term coupling the mechanical field and the electric potential is non-coercive and non-symmetric, and the term corresponding to the friction is convex and non-differentiable. On the other hand, there is a nonlinear coupling between the mechanical field and the frictional contact, which appears through the norm of the tangential component of the mechanical field in the friction functional.

To overcome these difficulties, several methods have been developed, such as the finite element method [8, 9], the penalty method and the fixed point method [4]. Among the most effective methods for this type of problem are those based on convex duality [10] and the introduction of Lagrange multipliers [11, 12, 13]. The primal-dual active set strategy, which is equivalent to an infinite-dimensional semismooth Newton method, is applied in [14], while in [15] the author proposes a numerical approximation based on an Uzawa block relaxation method. The alternating direction method of multipliers (ADMM) is used in [16].

In structural mechanics one is often more interested in the determination of the stress tensor \(\sigma\) than in the mechanical displacement itself. Methods have been developed to compute an approximation of \(\sigma\) from \(u\); their drawback is that it is not easy to build an approximation space of tensors satisfying the equilibrium relations and the required regularity. Mixed variational formulations have been developed to handle this difficulty. Concerning piezoelectric contact problems, mixed formulations were developed in [17, 18, 19].

In this paper, we introduce a mixed variational approach based on Lagrange multipliers which describes the static frictional contact between a piezoelectric body and a non-conductive foundation. The standard mixed variational formulation of contact problems formally takes the following form [17, 18, 20]:

\begin{equation}\label{model}\begin{cases} a(u,v)+b(v,\lambda)=(f,v),\hspace{1cm}\forall\,\, v\in V, \\ b(u,\delta-\lambda)\leq 0,\hspace{1cm}\forall\,\, \delta \in \Lambda, \end{cases} \end{equation}
(1)
where \(a(\cdot,\cdot)\) is symmetric and coercive, and the term \(b(\cdot,\cdot)\) couples the normal and tangential Lagrange multipliers. This coupling is difficult to handle numerically, and for a model describing the contact problem with friction it is important to use a decoupled form in order to identify the stick and slip zones on the contact boundary. Moreover, when the model (1) describes a problem with an electro-elastic body, the bilinear form \(a(\cdot,\cdot)\) becomes non-symmetric.
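To fix ideas about how a discrete saddle-point system of the form (1) is typically handled, the following is a minimal, purely illustrative Uzawa-type sketch (the matrices A and B, the projection proj_Lambda onto the discrete multiplier set and the step rho are hypothetical placeholders, not objects defined in this paper): one solves the first equation in \(u\) for a frozen multiplier, then updates the multiplier by a projected step, and iterates.

```python
# Minimal Uzawa-type iteration for a discrete saddle-point system
#   A u + B^T lam = f,   lam = proj_Lambda(lam + rho * B u),
# one classical way of treating problems of the form (1).
import numpy as np

def uzawa(A, B, f, proj_Lambda, rho=0.5, tol=1e-10, max_iter=1000):
    u = np.zeros(A.shape[0])
    lam = np.zeros(B.shape[0])
    for _ in range(max_iter):
        u_new = np.linalg.solve(A, f - B.T @ lam)        # a(u, v) + b(v, lam) = (f, v)
        lam_new = proj_Lambda(lam + rho * (B @ u_new))   # projected multiplier update
        if np.linalg.norm(u_new - u) + np.linalg.norm(lam_new - lam) < tol:
            break
        u, lam = u_new, lam_new
    return u_new, lam_new

# Toy usage: Lambda = nonnegative cone (e.g. a unilateral contact multiplier)
A = np.array([[4.0, 1.0], [1.0, 3.0]])
B = np.array([[1.0, 0.0]])
f = np.array([1.0, 2.0])
print(uzawa(A, B, f, proj_Lambda=lambda m: np.maximum(m, 0.0)))
```

In the decoupled approach developed in this paper, the multiplier set would split into a contact part and a friction part, so a projection step of this kind would act on each part separately.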

The idea consists in decoupling the contact from the friction by introducing two convex sets, one reserved for the contact multiplier and the second for the friction. This approach leads to decoupled inequalities in the mixed variational problem. Since \(a(\cdot,\cdot)\) is not symmetric, we follow the standard techniques and steps of [17, 18] but with a different fixed point map (\(Step\, 3\) in the proof), and hence more analysis is needed to obtain the existence and uniqueness result. The resulting discrete problem is a block system in two unknowns (displacement and potential); exploiting its structure and the form of the blocks, we apply a Gauss elimination technique and obtain a Schur complement in the matrix corresponding to the displacement subproblem. It is well known (see [16, 21]) that this technique provides a suitable preconditioner for the conjugate gradient method employed to solve the resulting symmetric and positive definite system.

To demonstrate the efficiency of this approach, we state a suitable numerical fixed point scheme. Its convergence is proved by means of an abstract perturbed problem and a fixed point process, to which the Banach fixed point theorem is applied. For details concerning the mathematical tools we refer to [22, 23, 24, 25].

The paper is structured as follows. In Section 2, we present the model of the equilibrium process of the elastic piezoelectric body in frictional contact with a non-conductive foundation, introduce the functional spaces for the various quantities, list the assumptions on the given data and derive the weak formulation of the problem. In Section 2.2 we state our main existence and uniqueness result, Theorem 1. Its proof, carried out in Section 3 in several steps, is based on an abstract result in the study of elliptic variational inequalities and on the Banach fixed point technique; the successive iterative method is then detailed, followed by its convergence result. In Section 4, we describe the finite element discretization and give some numerical experiments on a simple example.

2. Problem setting and main results

2.1. Problem setting

The piezoelectric body occupies, in its reference (initial) configuration, the domain \(\Omega\subset\mathbb{R}^{d}\), \(d=2,3\). We suppose that \(\Omega\) is bounded with a sufficiently smooth boundary \(\partial \Omega=\Gamma\). We denote by \(n\) the unit outer normal to \(\Gamma\). The indices take values in \(\{1,\,\cdots,\,d\}\), the summation convention over repeated indices is used, and an index that follows a comma represents the partial derivative with respect to the corresponding component of the variable.

Below we use \(\mathbb{S}^{d}\) to denote the space of second order symmetric tensors on \(\mathbb{R}^{d}\) while "\(.\)" and \(\| \cdot \|\) will denote the inner product and the Euclidean norm on \(\mathbb{S}^{d}\) and \(\mathbb{R}^{d}\), that is \begin{eqnarray*}u. v=u_{i}v_{i}, \hspace{1cm}\|v\|=(v. v)^{\frac{1}{2}},\hspace{0.5cm}\forall\;\; u,v\in \mathbb{R}^{d},\\ \sigma. \tau=\sigma_{ij}\tau_{ij}, \hspace{1cm}\|\tau\|=(\tau.\tau)^{\frac{1}{2}},\hspace{0.5cm}\forall\;\; \sigma,\tau\in \mathbb{S}^{d}.\end{eqnarray*}

We also use the notations \(u_{n}\) and \(u_\tau\) for the normal and tangential displacement, that is \(u_{n}=u. n\) and \(u_\tau=u-u_{n}n\). Similarly we denote by \(\sigma_{n}\) and \(\sigma_\tau\) the normal and tangential stress tensor given by \(\sigma_{n}=\sigma n. n\), \(\sigma_\tau=\sigma n-\sigma_{n}n\).\\ We introduce the following functional spaces on \(\Omega\); \begin{eqnarray*} &&H= L^{2}(\Omega)^{d}=\left\lbrace u=(u_{i})\; | \;u_{i}\in L^{2}(\Omega)\right\rbrace ,\;\;\; \mathcal{H}= \left\lbrace \sigma = \sigma_{ij},\; \sigma_{ij}=\sigma_{ji}\in L^{2}(\Omega)\right\rbrace, \\ &&H_{1}=\left\lbrace u \in H\; | \;\epsilon(u) \in \mathcal{H}\right\rbrace ,\;\;\;\mathcal{H}_1= \left\lbrace \sigma \in \mathcal{H}\;|\; Div \;\sigma \in H \right\rbrace \end{eqnarray*} endowed with the inner products \begin{eqnarray*}&&(u,v)_{H}=\int_{\Omega} u_{i}v_{i}dx,\hspace{0.5cm} (\sigma,\tau)_{\mathcal{H}}=\int_{\Omega} \sigma_{ij}\tau_{ij}dx.\\ &&(u,v)_{H_{1}}= (u,v)_{H}+(\varepsilon(u),\varepsilon(v))_{\mathcal{H}},\hspace{0.5cm} (\sigma,\; \tau)_{\mathcal{H}_{1}}=(\sigma,\; \tau)_{\mathcal{H}}+(Div\; \sigma,\; Div\; \tau)_{H} .\end{eqnarray*}

The associated norms on the spaces \(H\), \(\mathcal{H}\), \(H_{1}\) and \(\mathcal{H}_{1}\) are denoted by \(\|.\|_{H}\), \(\|.\|_{\mathcal{H}}\), \(\|.\|_{H_{1}}\) and \(\|.\|_{\mathcal{H}_{1}}\) respectively. We recall the well known Green's formula $$ \left( \sigma,\varepsilon(v) \right)_{\mathcal{H}} + \left( Div \; \sigma,v \right)_{H} = \int_{\Gamma} \sigma n. v \; da \;\;\forall\;\; v \in H_{1}, $$ where \(Div \; \sigma = (\sigma_{ij,j})\) and for more details of this formula see [24].
In addition, we shall use the following notations: \(u\) is the displacement field, \(\varepsilon (u)=(\varepsilon_{ij}(u))\) denotes the strain tensor, given by \(\varepsilon_{ij}(u)=\frac{1}{2}(u_{i,j}+u_{j,i})\), and \(\sigma =(\sigma_{ij})\) is the stress tensor. Let \(\varphi\) denote the electric potential; \(E(\varphi)=(E_{i}(\varphi))\) is the electric field, defined by \(E_{i}(\varphi)=-\varphi_{,i}\), and \(D=(D_{i})\) is the electric displacement field.
The equilibrium equations are given by
\begin{equation}\label{2.1} -Div (\sigma) = f_{0}\; \text{ in } \Omega, \end{equation}
(2)
\begin{equation} div( D)=q_{0}\; \text{ in } \Omega, \end{equation}
(3)
where the constitutive relations for the piezoelectric material are:
\begin{equation}\label{2.3} \sigma=\mathcal{A}\varepsilon(u)-\mathcal{B}^{*}E(\varphi) \; \text{ in }\Omega, \end{equation}
(4)
\begin{equation} D=\mathcal{B}\varepsilon(u)+\beta E(\varphi) \; \text{ in }\Omega, \end{equation}
(5)
where \(\mathcal{A}=(a_{ijkl})\) is a (fourth-order) elasticity tensor, \(\mathcal{B}=(b_{ijk})\) is the (third-order) piezoelectric tensor, \(\mathcal{B}^{*}\) is the transpose of \(\mathcal{B}\), \(\beta=(\beta_{ij})\) is the electric permittivity tensor and \(div(D)=D_{i,i}\) (see [26]).
To give the mechanical and electrical boundary conditions, we subdivide \(\Gamma\) into three disjoint measurable parts \(\Gamma_{1},\Gamma_{2},\Gamma_{3}\) such that \(meas(\Gamma_{1})>0\). The body is assumed to be clamped on \(\Gamma_{1}\), surface tractions of density \(f_{2}\) act on \(\Gamma_{2}\), and on \(\Gamma_{3}\) the body may come into frictional contact with the so-called foundation (an insulating foundation). A second partition of \(\Gamma\) is \(\Gamma=\Gamma_{3}\cup\Gamma_{a}\cup \Gamma_{b}\). A surface electric charge of density \(q_{2}\) acts on \(\Gamma_{b}\), and the electric potential vanishes on \(\Gamma_{a}\). We use the same symbol \(v\) for the trace of \(v\) on \(\Gamma\).
\begin{equation}\label{2.5} u=0\text{ on } \Gamma_{1}. \end{equation}
(6)
\begin{equation} \sigma n=f_{2} \text{ on } \Gamma_{2} \end{equation}
(7)
\begin{equation} \varphi=0\text{ on } \Gamma_{a}. \end{equation}
(8)
\begin{equation} D.n=q_{2}\text{ on }\Gamma_{b}. \end{equation}
(9)
The contact and the Coulomb friction conditions:
\begin{equation}\label{2.9} u_{n}-g \leq 0,\; \sigma_{n} \leq 0 \text{ and } \sigma_{n}(u_{n}-g)=0 \text{ on } \Gamma_{3}, \end{equation}
(10)
\begin{equation}\label{2.10} \left\lbrace \begin{array}{ccc} \text{If} \; u_{\tau}=0 \; \text{then} \; \| \sigma_{\tau}(u)\| \leq -\mathcal{F} \sigma_{n}(u) &\text{ on }& \Gamma_{3},\\ \text{If} \; u_{\tau} \neq 0 \; \text{then} \; \sigma_{\tau}(u)= \mathcal{F} \sigma_{n}(u)\dfrac{u_{\tau}}{ \| u_{\tau} \|} &\text{ on }& \Gamma_{3}, \end{array}\right. \end{equation}
(11)
where \(\mathcal{F}\) is the friction coefficient and \(g\) is the gap between the body and the rigid foundation. The electric contact condition is;
\begin{equation} D.n =0 \text{ on } \Gamma_{3}.\label{2.11} \end{equation}
(12)
To summarize, we consider the following problem:

Problem 1. Find the displacement field \(u:\Omega\longrightarrow \mathbb{R}^{d}\) and the electric potential field \(\varphi:\Omega\longrightarrow \mathbb{R}\) such that (2)-(12) hold.

To study Problem 1, we will assume, under the Einstein summation convention, that:
\begin{equation}\label{2.12} \left\lbrace \begin{array}{cc} (a)&\mathcal{A}=(\mathcal{A}_{ijsl}):\Omega\times \mathbb{S}^{d}\longrightarrow\mathbb{S}^{d},\\ (b)&\mathcal{A}_{ijsl}=\mathcal{A}_{ijls}=\mathcal{A}_{lsij}\in L^{\infty}(\Omega),\\ (c)&\exists m_{\mathcal{A}}>0 \text{ such that: }\mathcal{A}_{ijsl}\varepsilon_{ij}\varepsilon_{ls}\geq m_{\mathcal{A}}|\varepsilon|^{2},\hspace{0.25cm}\varepsilon\in \mathbb{S}^{d},\;a.e.\;on \;\Omega, \end{array}\right. \end{equation}
(13)
\begin{equation}\label{2.13} \left\lbrace \begin{array}{cc} (a)&\mathcal{B}=(\mathcal{B}_{ijk}):\Omega\times \mathbb{S}^{d}\longrightarrow\mathbb{R}^{d},\\ (b)&\mathcal{B}_{ijk}=\mathcal{B}_{ikj}\in L^{\infty}(\Omega),\\ \end{array}\right. \end{equation}
(14)
\begin{equation}\label{2.14} \left\lbrace \begin{array}{cc} (a)&\beta=(\beta_{ij}):\Omega\times \mathbb{R}^{d}\longrightarrow\mathbb{R}^{d},\\ (b)&\beta_{ij}=\beta_{ji}\in L^{\infty}(\Omega),\\ (c)&\exists m_{\beta}>0 \text{ such that: }\beta_{ij}E_{i}E_{j}\geq m_{\beta}|E|^{2},\;E\in \mathbb{R}^{d},\;a.e.\;on\; \Omega, \end{array}\right. \end{equation}
(15)
\begin{equation}\label{2.15} f_{0}\in L^{2}(\Omega)^{d},\hspace{1cm}f_{2}\in L^{2}(\Gamma_{2})^{d}, \end{equation}
(16)
\begin{equation}\label{2.16} q_{0}\in L^{2}(\Omega),\hspace{1cm}q_{2}\in L^{2}(\Gamma_{b}). \end{equation}
(17)

Let us introduce the following Hilbert spaces: $$V=\left\lbrace v\in [H^{1}(\Omega)]^{d}/v=0\text{ on }\Gamma_{1}\right\rbrace, $$ $$W=\left\lbrace \varphi\in H^{1}(\Omega)/\varphi =0\text{ on }\Gamma_{a}\right\rbrace, $$ $$K=\left\lbrace v\in [H^{\frac{1}{2}}(\Gamma_{3})]^{d}/v_{n}\leq g\text{ on }\Gamma_{3}\right\rbrace.$$

If \(u\) and \(\varphi\) are regular functions which satisfy (2)-(10), then we find: $$\int_{\Omega}\mathcal{A}\varepsilon(u)\varepsilon(v)dx+\int_{\Omega}\mathcal{B}^*\nabla \varphi\varepsilon(v) dx=\int_{\Omega}f_{0}vdx+\int_{\Gamma_{2}}f_{2}vd\Gamma+\int_{\Gamma_{3}}(\sigma n).vd\Gamma$$

$$-\int_{\Omega}\mathcal{B}\varepsilon(u)\nabla \psi dx +\int_{\Omega}\beta \nabla \varphi\nabla \psi dx=\int_{\Omega}q_{0}\psi dx-\int_{\Gamma_{b}}q_{2}\psi d\Gamma .$$ Let us introduce the functional space \(\tilde{V}=V\times W\), which is the Hilbert space endowed with the inner product:
\((\tilde{u},\tilde{v})_{\tilde{V}}=(u,v)_{V}+(\varphi,\psi )_{W}\) where \(\tilde{u}=(u,\varphi ),\;\tilde{v}=(v,\psi )\in \tilde{V}\). Let \( a:\tilde{V}\times\tilde{V}\longrightarrow\mathbb{R}\) be the bi-linear form given by: \(\displaystyle{a(\tilde{u},\tilde{v})=\int_{\Omega}\mathcal{A}\varepsilon(u)\varepsilon(v)dx+\int_{\Omega}\mathcal{B}^*\nabla \varphi\varepsilon(v) dx -\int_{\Omega}\mathcal{B}\varepsilon(u)\nabla \psi dx +\int_{\Omega}\beta\nabla \varphi \nabla \psi dx}.\) Moreover, by Riesz's representation theorem, we define \(\tilde{f}\in\tilde{V}\) by: $$(\tilde{f},\tilde{v})_{\tilde{V}}:=\int_{\Omega}f_{0}vdx+\int_{\Gamma_{2}}f_{2}vd\Gamma +\int_{\Omega}q_{0}\psi dx-\int_{\Gamma_{b}}q_{2}\psi d\Gamma.$$ Using the previous tools, we find: $$a(\tilde{u},\tilde{v})=(\tilde{f},\tilde{v})_{\tilde{V}}+\int_{\Gamma_{3}}(\sigma n).vd\Gamma.$$ Since \((\sigma n).v=\sigma_\tau v_\tau+\sigma_{n}v_{n}\), then: $$a(\tilde{u},\tilde{v})=(\tilde{f},\tilde{v})_{\tilde{V}}+\int_{\Gamma_{3}}\sigma_\tau v_\tau+\sigma_{n}v_{n}d\Gamma.$$ Let \(H_\Gamma^{*}\) be the dual space of the space \(H_\Gamma=[H^{\frac{1}{2}}(\Gamma_{3})]^{d}\) and let us define
\begin{equation}\label{2.17} M_{T}(\lambda)=\left\lbrace \delta \in H_\Gamma^{*},\hspace{1cm}\left\langle \delta ,v_\tau \right\rangle_{H_\Gamma^{*},H_\Gamma}\leq\int_{\Gamma_{3}}\lambda|v_\tau|d\Gamma,\;v_\tau\in H_\Gamma \right\rbrace , \end{equation}
(18)
\begin{equation}\label{2.18} M_{N}=\left\lbrace \delta \in H_\Gamma^{*},\hspace{1cm}\left\langle \delta ,v \right\rangle_{\Gamma_{3}}\leq 0,\;v\in K_n \right\rbrace , \end{equation}
(19)
where \(\left\langle \cdot , \cdot\right\rangle_{H_\Gamma^{*},H_\Gamma}\) denotes the duality product between \(H_\Gamma^{*}\) and \(H_\Gamma\), \(K_n\) the set of the normal component of admissible displacement, i.e. \(K_n=\left\lbrace v_n, v_n\leq g\right\rbrace .\)

It is straightforward that \(M_{T,N}\) are two closed convex sets of \(H_\Gamma^{*}\) and \(0_{H_\Gamma^{*}}\in M_{N,T}\). We introduce two dual Lagrange multipliers \(\lambda_N\) and \(\lambda_T \in M\) as follows: $$\left\langle \lambda_N, v\right\rangle_{\Gamma_{3}}:=-\int_{\Gamma_{3}}\sigma_{n}v_{n}d\Gamma ,\hspace{0.5cm}v\in V, \; \text{and}\;\left\langle \lambda_T, v\right\rangle_{\Gamma_{3}}:=-\int_{\Gamma_{3}}\sigma_\tau v_\tau d\Gamma , \hspace{0.5cm}v\in V.$$

We define two bi-linear and continuous forms \(b_{1}\) and \(b_{2}\) for all \(v \in V,\; \delta_{1}, \delta_{2} \in H_\Gamma^{*}\) as follows: $$b_{1}: \tilde{V} \times H_\Gamma^{*} \longrightarrow \mathbb{R},\hspace{1cm}b_{1}(\tilde{v},\delta_{1}):=\left\langle \delta_{1} ,v \right\rangle_{\Gamma_{3}},$$ $$b_{2}:\tilde{V} \times H_\Gamma^{*} \longrightarrow \mathbb{R},\hspace{1cm}b_{2}(\tilde{v},\delta_{2}):=\left\langle \delta_{2} ,v \right \rangle_{\Gamma_{3}}.$$

We see that \(\displaystyle{b_{1}(\tilde{u},\lambda_N)=-\int_{\Gamma_{3}}\sigma_{n}u_{n}d\Gamma}\), and by definition of \(M_{N}\) we have: $$ b_{1}(\tilde{u},\delta_{1}-\lambda_N)\leq 0,\hspace{1cm} \forall\;\; \delta_{1} \in M_{N}. $$ Also, taking into account the definition of \(M_T\), \(\lambda_T\), and the assumption (10), we have: $$b_{2}(\tilde{u},\lambda_T)=\left\langle \lambda_T, u\right\rangle_{\Gamma_{3}}=-\int_{\Gamma_{3}}\sigma_\tau u_\tau d\Gamma.$$ Keeping in mind that the Sobolev trace operator is linear and continuous, it is clear that there exists \(M_{b_{i}}>0\) such that:
\begin{equation}\label{2.19} |b_{i}(\tilde{v},\delta_{i})|\leq M_{b_{i}}||\tilde{v}||_{\tilde{V}}||\delta_{i}||_{H_\Gamma^{*}},\;i=1,2. \end{equation}
(20)
In addition, using the properties of the Sobolev trace operator it can be shown that there exists \(\alpha_{i}>0\) such that:
\begin{equation}\label{2.20} \inf_{\delta_{i} \in H_\Gamma^{*} \setminus \{0\}}\sup_{\tilde{v} \neq 0}\dfrac{b_{i}(\tilde{v},\delta_{i})}{\|\tilde{v}\|_{\tilde{V}} \|\delta_{i}\|_{H_\Gamma^{*}}}\geq \alpha_{i},\;i=1,2. \end{equation}
(21)
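In the finite element setting used later in Section 4, a discrete counterpart of (21) is required, and its constant can be estimated numerically: with Gram matrices inducing the norms on the primal and multiplier spaces, the discrete inf-sup constant is the smallest singular value of the norm-scaled coupling matrix. The following is a minimal sketch with hypothetical small matrices (the names MV, ML and Bmat are illustrative and not taken from the paper):

```python
import numpy as np

# Hypothetical small matrices: Bmat plays the role of a discrete coupling form b_i,
# MV and ML are the Gram matrices inducing the norms on the primal and multiplier spaces.
rng = np.random.default_rng(0)
nV, nL = 12, 4
Bmat = rng.standard_normal((nL, nV))          # rows: multiplier dofs, columns: primal dofs
MV = rng.standard_normal((nV, nV)); MV = MV @ MV.T + np.eye(nV)   # SPD norm matrix (assumption)
ML = rng.standard_normal((nL, nL)); ML = ML @ ML.T + np.eye(nL)   # SPD norm matrix (assumption)

def inv_sqrt(M):
    """Inverse square root of a symmetric positive definite matrix."""
    w, Q = np.linalg.eigh(M)
    return Q @ np.diag(1.0 / np.sqrt(w)) @ Q.T

# discrete inf-sup constant = smallest singular value of ML^{-1/2} Bmat MV^{-1/2}
alpha_h = np.linalg.svd(inv_sqrt(ML) @ Bmat @ inv_sqrt(MV), compute_uv=False).min()
print("discrete inf-sup constant:", alpha_h)
```

In practice one would assemble Bmat, MV and ML from the finite element spaces and check that the computed constant stays bounded away from zero under mesh refinement.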
The following weak formulation of Problem 1 is then obtained:

Problem 2. (Weak formulation of Problem 1) Find \(\tilde{u}\in\tilde{V}\) and \(\lambda=(\lambda_N,\lambda_T)\in M_{N}\times M_{T}(\mathcal{F}\lambda_N)\) such that:

\begin{equation}\label{2.21} a(\tilde{u},\tilde{v})+b_{1}(\tilde{v},\lambda_N)+b_{2}(\tilde{v},\lambda_T)=(\tilde{f},\tilde{v})_{\tilde{V}},\hspace{1cm}\forall\;\; \tilde{v}\in\tilde{V}, \end{equation}
(22)
\begin{equation}\label{2.22} b_{1}(\tilde{u},\delta_{1}-\lambda_N)\leq 0,\hspace{1cm}\forall\;\; \delta_{1} \in M_{N}, \end{equation}
(23)
\begin{equation}\label{2.23} b_{2}(\tilde{u},\delta_{2} - \lambda_T)\leq 0,\hspace{1cm} \forall\;\; \delta_{2} \in M_{T}(\mathcal{F} \lambda_N). \end{equation}
(24)

2.2. Main results

In this section we present our main results.

Theorem 1. Assume (13)-(17). Then Problem 2 has a unique solution \((\tilde{u},\lambda)\in\tilde{V}\times M\). Moreover, if \((\tilde{u}_{1},\lambda)\) and \((\tilde{u}_{2},\gamma)\) are two solutions of Problem 2 for given data \(\tilde{f}_{1}\) and \(\tilde{f}_{2}\) respectively, then $$||\tilde{u}_{1}-\tilde{u}_{2}||_{\tilde{V}}+||\lambda -\gamma||_{H_\Gamma^{*} \times H_\Gamma^{*}}\leq C\left(||\tilde{f}_{1}-\tilde{f}_{2}||_{\tilde{V}}\right).$$

We denote \(b_{1}(v,\delta_{1})+b_{2}(v,\delta_{2})=\left\langle \delta_{1}, v\right\rangle_{\Gamma_{3}}+\left\langle \delta_{2}, v\right\rangle_{\Gamma_{3}}=\left\langle \delta_{1}+\delta_{2}, v\right\rangle_{\Gamma_{3}}\), hence there exists \(\alpha >0\) such that
\begin{equation}\label{2.24} \inf_{\delta \in H_\Gamma^{*} \setminus \{0\}}\sup_{v \neq 0}\dfrac{b(v ,\delta)}{||v||_{V}||\delta||_{H_\Gamma^{*}}} \geq \alpha. \end{equation}
(25)
where \(b(\cdot,\cdot)= b_{1}(\cdot,\cdot) + b_{2}(\cdot,\cdot):\tilde{V}\times M_{N} \times M_{T}\longrightarrow \mathbb{R}\). We now introduce a numerical scheme to compute the solution of Problem 2. The scheme is a fixed point iteration and is stated in Algorithm 1.

Proposition 1. Let \((u^{\ell},\lambda^\ell)\) be the sequence generated by Algorithm 1. Then

\begin{equation} ||u^{\ell}-u||_{\tilde{V}}+\| \lambda^\ell -\lambda \|_{H_{\Gamma}^{*} \times H_{\Gamma}^{*}}\longrightarrow 0,\text{ as } \ell \longrightarrow +\infty. \end{equation}
(29)
The proof of the main results will be presented in the next section.

3. Proof of the main result

Let \(X\) and \(Y\) be two Hilbert spaces endowed with the inner products \((\cdot,\cdot)_{X}\) and \((\cdot,\cdot)_{Y}\) respectively, and let us consider two bi-linear forms as follows:
\(a(\cdot,\cdot):X\times X\longrightarrow\mathbb{R}\), generally non symmetric, such that
\begin{equation}\label{3.1} \exists\;\; M_{a}>0 \text{ such that }|a(u,v)|\leq M_{a}||u||_{X}||v||_{X},\hspace{1cm} \forall\;\; \;u,v\in X, \end{equation}
(30)
\begin{equation}\label{3.2} \exists \;\; m_{a}>0 \text{ such that } a(v,v)\geq m_{a}||v||^{2}_{X},\hspace{1cm} \forall\;\; \;v\in X, \end{equation}
(31)
and \(b(\cdot,\cdot):X\times Y\times Y\longrightarrow\mathbb{R}\), \(b(v,\lambda)=b_{1}(v,\lambda_N)+b_{2}(v,\lambda_T)\) such that
\begin{equation}\label{3.3} \exists\;\; M_{b}>0 \text{ such that }|b(v,\delta)|\leq M_{b}||v||_{X}||\delta||_{Y\times Y},\hspace{0.25cm} \forall\;\; \;(v,\delta)\in X\times Y\times Y, \end{equation}
(32)
\(\exists \;\;M_{b_{i}}>0 \) such that
\begin{equation}\label{3.4} |b_{i}(v,\delta)|\leq M_{b_{i}}||v||_{X}||\delta||_{Y\times Y},\;\hspace{0.25cm} \forall\;\; \;(v,\delta)\in X\times Y\times Y,\;i=1,2, \end{equation}
(33)
there exists \(\alpha >0\) such that
\begin{equation}\label{3.5} \inf_{\delta\in Y\times Y\setminus \{0\}}\sup_{v\in X\setminus \{0\}}\dfrac{b(v,\delta)}{||v||_{X}||\delta||_{Y\times Y}}\geq\alpha. \end{equation}
(34)
Now, let \(M=M_N \times M_T \subset Y\times Y\) be a closed and convex set containing \(0_{Y\times Y}\). We consider the following problem:

Problem 3. For given \(f\in X\), find \(u\in X\) and \(\lambda=(\lambda_N,\lambda_T)\in M\) such that:

\begin{equation} a(u,v)+b(v,\lambda)=(f,v)_{X},\;\forall\;\; v \in X, \end{equation}
(35)
\begin{equation} b_{1}(u,\delta-\lambda_N)\leq 0,\; \forall\; \delta\in M_{N}, \end{equation}
(36)
\begin{equation} b_{2}(u,\delta-\lambda_T)\leq 0,\;\forall\;\; \delta\in M_{T}. \end{equation}
(37)
We have the following result;

Theorem 2. Let \(f \in X\) and assume that (30)-(34) hold. Then, there exists a unique solution \((u,\lambda)\) of Problem 3. Moreover, if \((u_{1},\lambda)\) and \((u_{2},\gamma)\) are two solutions of Problem 3 for given data functions \(f_{1}\in X\) and \(f_{2}\in X\) respectively, then there exists a constant \(c>0\) such that:

\begin{equation}\label{3.9} ||u_{1}-u_{2}||_{X}+||\lambda-\gamma||_{Y\times Y}\leq c\left(||f_{1}-f_{2}||_{X}\right). \end{equation}
(38)

Proof. We consider the symmetric part \(a_{0}(.,.)\) and the anti-symmetric part \(c(.,.)\) of \(a(.,.)\), defined respectively by $$a_{0}:X\times X\longrightarrow \mathbb{R} ,\;\;a_{0}(u,v):=(a(u,v)+a(v,u))/2,\hspace{1cm}\forall\;\; u,v\in X,$$ $$c:X\times X\longrightarrow \mathbb{R} , \;\;c(u,v):=(a(u,v)-a(v,u))/2,\hspace{1cm}\forall\;\; u,v\in X.$$ For given \(0\leq t\leq 1\), we define the bilinear form

\begin{equation}\label{3.10} a_t:X\times X\longrightarrow \mathbb{R} , \;\;a_{t}(u,v):=a_{0}(u,v)+t c(u,v),\hspace{1cm }\forall\;\; u,v\in X. \end{equation}
(39)
For all \( t \in [0,1] \), we note that $$a_{t}(v,v)\geq m_{a}||v||^{2}_{X},\hspace{1cm}|a_{t}(u,v)|\leq 2M_{a}||u||_{X}||v||_{X},\hspace{1cm} \forall\;\; u,v\in X.$$ Let us consider the following auxiliary perturbed problem:

Problem 4. (Auxiliary perturbed problem) For given \(f\in X\), find \(u\in X\) and \(\lambda \in M \), such that \begin{eqnarray} a_{t}(u,v)+b(v,\lambda)=(f,v)_{X},\hspace{1cm}\forall\;\; v\in X,\label{3.11} \\ b_{1}(u,\delta-\lambda_N)\leq 0,\hspace{1cm}\forall\;\; \delta\in M_N,\label{3.12} \\ b_{2}(u,\delta-\lambda_T)\leq 0,\hspace{1cm}\forall\;\; \delta\in M_T.\label{3.13} \end{eqnarray}

The rest of the proof proceeds in several steps.
Step 1.
If \(t=0\), Problem 4 has a unique solution. Indeed, for \(t=0\) the problem is equivalent to the saddle point problem: find \(u\in X\) and \(\lambda\in M\) such that $$\mathcal{L}(u,\delta)\leq \mathcal{L}(u,\lambda) \leq\mathcal{L}(v,\lambda),\hspace{1cm}\forall\;\; v\in X,\delta\in M,$$ where \(\mathcal{L}:X\times M \longrightarrow\mathbb{R}\) is defined by \(\begin{array}{ccc} \mathcal{L}(v,\delta)&:=& \dfrac{1}{2}a_{0}(v,v)-(f,v)_{X}+b_{2}(v,\delta_{2})+b_{1}(v,\delta_{1}) \\ &=& \dfrac{1}{2} a_{0}(v,v)-(f,v)_{X}+b(v,\delta). \end{array}\) The Lagrangian \(\mathcal{L}(.,.)\) has at least one saddle point, see [10]. In fact, from \(\mathcal{L}(v,0)=\dfrac{1}{2}a_{0}(v,v)-(f,v)_{X}\) and the coercivity of \(a_{0}(.,.)\) we have $$\lim_{||v||_{X}\rightarrow +\infty}\mathcal{L}(v,0)=+\infty .$$ Moreover
\begin{equation}\label{3.14} \lim_{||\delta||_{Y\times Y}\rightarrow +\infty}\inf_{v\in X}\mathcal{L}(v,\delta)=-\infty\,. \end{equation}
(43)
Indeed, let \(\delta_{0}\) be an element of \(M\) and let \(u_{\delta_{0}}\in X\) be the solution of the equation
\begin{equation}\label{3.15} a_{0}(u_{\delta_{0}},v)+b(v,\delta_{0})=(f,v)_{X}, \hspace{1cm} \forall\;\; v\in X, \end{equation}
(44)
which is equivalent to saying that \(u_{\delta_{0}}\) solves the following minimization problem $$\inf_{v\in X}\mathcal{L}(v,\delta_{0})=\dfrac{1}{2}a_{0}(v,v)-(f,v)_{X}+b(v,\delta_{0}),$$ that is $$\dfrac{1}{2}a_{0}(u_{\delta_{0}},u_{\delta_{0}})-(f,u_{\delta_{0}})_{X}+b(u_{\delta_{0}},\delta_{0})=\inf_{v\in X}\mathcal{L}(v,\delta_{0}).$$ Substituting \(v=u_{\delta_{0}}\) in (44), we get $$\dfrac{1}{2}a_{0}(u_{\delta_{0}},u_{\delta_{0}})-(f,u_{\delta_{0}})_{X}+b(u_{\delta_{0}},\delta_{0})=-\dfrac{1}{2}a(u_{\delta_{0}},u_{\delta_{0}}),$$ which implies that
\begin{equation}\label{3.16} \inf_{v\in X}\mathcal{L}(v,\delta_{0})\leq\dfrac{-m_{a}}{2}||u_{\delta_{0}}||^{2}_{X}. \end{equation}
(45)
Additionally, using the inf-sup property of the form \(b(.,.)\); we deduce that there exists a constant \(C>0\) such that
\begin{equation}\label{3.17} \|\delta_{0}\|_{Y\times Y}\leq C(||f||_{X}+||u_{\delta_{0}}||_{X}). \end{equation}
(46)
From (45) and (46) we deduce (43), which implies the existence of a solution of Problem 4 for \(t=0\). To show the uniqueness of the solution, let us assume that \((u_{1},\lambda)\) and \((u_{2},\gamma)\) are two solutions of the problem $$a_{0}(u_{1},v)+b(v,\lambda)=(f,v)_{X},\hspace{1cm}\forall\;\; v \in X,$$ $$a_{0}(u_{2},v)+b(v,\gamma)=(f,v)_{X},\hspace{1cm}\forall\;\; v \in X.$$ By subtracting these two equations, we find $$a_{0}(u_{1}-u_{2},v)+b(v,\lambda)-b(v,\gamma)=0.$$ If we set \(v=u_{1}-u_{2}\), we get \(a_{0}(u_{1}-u_{2},u_{1}-u_{2})+b(u_{1}-u_{2},\lambda)-b(u_{1}-u_{2},\gamma)=0,\) \begin{eqnarray*} a_{0}(u_{1}-u_{2},u_{1}-u_{2})&=&-b(u_{1}-u_{2},\lambda)+b(u_{1}-u_{2},\gamma)\\ &=& b(u_{2}-u_{1},\lambda)+b(u_{1}-u_{2},\gamma)\\ &=& b_{1}(u_{2}-u_{1},\lambda_N)+b_{2}(u_{2}-u_{1},\lambda_T)+b_{1}(u_{1}-u_{2},\gamma_{1})+b_{2}(u_{1}-u_{2},\gamma_{2})\\ &=& b_{1}(u_{2}-u_{1},\lambda_N-\gamma_{1})+b_{2}(u_{2}-u_{1},\lambda_T-\gamma_{2})\leq 0 \end{eqnarray*} and by coercivity of \(a_{0}\), we have \(u_{1}=u_{2}\). Moreover $$0=-a_{0}(u_{1}-u_{2},v)=b(v,\lambda-\gamma),$$ and by the inf-sup property of \(b(.,.)\), we have $$\alpha \|\lambda - \gamma \|_{Y \times Y}\leq \sup_{v\in X}\dfrac{b(v,\lambda-\gamma)}{||v||_{X}}=0,$$ and finally \(\lambda=\gamma\).
Step 2.
Assume now that \(f \in X\); then there exists a unique solution \((u,\lambda) \in X \times M\) of Problem 4. If \((u_{1},\lambda)\) and \((u_{2},\gamma)\) are two solutions of Problem 4 corresponding to two given data \(f_{1}\in X\) and \(f_{2}\in X\) respectively, then
\begin{equation}\label{3.18} ||u_{1}-u_{2}||_{X}+||\lambda -\gamma||_{Y\times Y}\leq\dfrac{\alpha+m_{a}+2M_{a}}{\alpha m_{a}}||f_{1}-f_{2}||_{X}. \end{equation}
(47)
In fact $$a_{t}(u_{1}-u_{2},u_{1}-u_{2})=(f_{1}-f_{2},u_{1}-u_{2})_{X}+b(u_{1}-u_{2},\gamma-\lambda),$$ $$b_{1}(u_{1}-u_{2},\gamma_{1}-\lambda_N)\leq 0,$$ $$b_{2}(u_{1}-u_{2},\gamma_{2}-\lambda_T)\leq 0.$$ Since \(a_{t}\) is coercive, we hence obtain
\begin{equation}\label{3.19} ||u_{1}-u_{2}||_{X} \leq \dfrac{1}{m_{a}}||f_{1}-f_{2}||_{X}. \end{equation}
(48)
In addition, \(b(v,\lambda -\gamma) =(f_{1}-f_{2},v)_{X}+a_{t}(u_{2}-u_{1},v)\) and by the inf-sup property of \(b(.,.)\) we have $$\alpha||\lambda-\gamma||_{Y\times Y}\leq\sup_{v\in X \setminus\{0\}}\dfrac{b(v,\lambda-\gamma)}{||v||_{X}}\leq ||f_{1}-f_{2}||_{X}+2M_{a}||u_{1}-u_{2}||_{X},$$ that is
\begin{equation}\label{3.20} ||\lambda-\gamma||_{Y\times Y}\leq\dfrac{m_{a}+2M_{a}}{\alpha m_{a}} ||f_{1}-f_{2}||_{X}\,. \end{equation}
(49)
Hence (48) and (49) lead to
\begin{equation}\label{3.21} ||u_{1}-u_{2}||_{X}+||\lambda -\gamma||_{Y\times Y}\leq\dfrac{\alpha+m_{a}+2M_{a}}{\alpha m_{a}}||f_{1}-f_{2}||_{X}. \end{equation}
(50)
Step 3.
Let \(\tau \in [0,1]\). Assume for given \(f,g \in X\) there exists a unique solution of Problem 4 with \(t=\tau\), \((u,\lambda)\in X \times M\). Then for given \(f\in X\) there exists a unique solution \((u,\lambda)\in X\times M\) of Problem 4 with \(t\in [\tau;\tau+t_{0}]\subset [0,1]\), where: $$0< t_{0}< \dfrac{\alpha m_{a}}{M_{a}(\alpha+m_{a}+2M_{a})}< 1.$$ Indeed, given \(f\in X\), we define the operator \(T:X \times M \longrightarrow X \times M\) as follows \(T(w,\xi):=(u,\lambda)\) if \((u,\lambda)\) is the solution of the following problem:

Problem 5. For given \(f\in X\), find \(u\in X\) and \(\lambda \in M\), such that

\begin{equation} a_{\tau}(u,v)+b(v,\lambda)=(F_{s},v)_{X},\hspace{1cm}\forall\;\; v \in X,\label{3.22} \end{equation}
(51)
\begin{equation} b_{1}(u,\delta-\lambda_N)\leq 0,\hspace{1cm}\forall\;\; \delta\in M_N,\label{3.23} \end{equation}
(52)
\begin{equation} b_{2}(u,\delta-\lambda_T)\leq 0,\hspace{1cm}\forall\;\; \delta\in M_T ,\label{3.24} \end{equation}
(53)
where \((F_{s},v)_{X}=(f,v)_{X}-(s-\tau)c(w,v),\hspace{1cm}\tau \leq s \leq \tau+t_{0} \leq 1\).

We will show that \(T\) is a contraction. To this end, we consider two pairs \((w_{1},\xi)\) and \((w_{2},\chi)\in X\times Y\times Y\) and write \(T(w_{1},\xi)=(u_{1},\lambda)\), \(T(w_{2},\chi)=(u_{2},\gamma)\). We have $$||T(w_{1},\xi)-T(w_{2},\chi)||_{X\times Y\times Y}=||u_{1}-u_{2}||_{X}+||\lambda -\gamma||_{Y\times Y}.$$ By the same arguments as in (48) and (49) and by the definition of \(F_{s}\), we obtain
\begin{equation}\label{3.25} ||\lambda-\gamma ||_{Y\times Y}\leq\dfrac{m_{a}+2M_{a}}{\alpha m_{a}} t_{0}M_{a}||w_{1}-w_{2}||_{X}\,. \end{equation}
(54)
In addition,
\begin{equation}\label{4.26} ||u_{1}-u_{2}||_{X} \leq\dfrac{1}{m_{a}}t_{0}M_{a}||w_{1}-w_{2}||_{X}, \end{equation}
(55)
and hence, $$||u_{1}-u_{2}||_{X}+||\lambda -\gamma||_{Y\times Y}\leq \dfrac{t_{0}M_{a}(\alpha+m_{a}+2M_{a})}{\alpha m_{a}}||w_{1}-w_{2}||_{X},$$ $$||u_{1}-u_{2}||_{X}+||\lambda -\gamma||_{Y\times Y} \leq \dfrac{t_{0}M_{a}(\alpha+m_{a}+2M_{a})}{\alpha m_{a}}||(w_{1},\xi)-(w_{2},\chi)||_{X\times Y\times Y},$$ which implies that \(T\) is a contraction, and by the Banach fixed point theorem we conclude that \(T\) has a unique fixed point. Let \((\bar{u} ,\bar{\lambda})\) be the unique fixed point of \(T\). Using the definition of the operator \(T\), we deduce that $$a_{\tau}(\bar{u},v)+b(v,\bar{\lambda})=(F_{s},v)_{X},\hspace{1cm}\forall\;\; v\in X,$$ $$b_{1}(\bar{u},\delta-\bar{\lambda}_N)\leq 0,\hspace{1cm}\forall\;\; \delta\in M_N,$$ $$b_{2}(\bar{u},\delta-\bar{\lambda}_T)\leq 0,\hspace{1cm}\forall\;\; \delta\in M_T$$ and \((F_{s},v)_{X}=(f,v)_{X}-(s-\tau)c(\bar{u},v)\), for \(\tau \leq s \leq \tau+t_{0} \leq 1\).
Substituting \(F_{s}\) into the first equation, we deduce that $$a_{\tau}(\bar{u},v)+(s-\tau)c(\bar{u} ,v)+b(v,\bar{\lambda})=(f,v)_{X},\hspace{1cm}\forall\;\; v\in X,$$ $$b_{1}(\bar{u},\delta- \bar{\lambda}_N) \leq 0,\hspace{1cm}\forall\;\; \delta\in M_N,$$ $$b_{2}(\bar{u},\delta-\bar{\lambda}_T) \leq 0,\hspace{1cm}\forall\;\; \delta\in M_T,$$ that is $$a_{0}( \bar{u},v)+sc(\bar{u},v)+b(v,\bar{\lambda})=a_{s}(\bar{u},v)+b(v,\bar{\lambda} )=(f,v)_{X},\hspace{1cm}\forall\;\; v\in X,$$ $$b_{1}(\bar{u}, \delta- \bar{\lambda}_N)\leq 0,\hspace{1cm}\forall\;\; \delta\in M_N,$$ $$b_{2}(\bar{u} ,\delta- \bar{\lambda}_T )\leq 0,\hspace{1cm}\forall\;\; \delta\in M_T,$$ which gives the existence of a solution. In order to justify the uniqueness, let us assume that the problem with \(t=s\in [\tau,\tau+t_{0}]\) has two solutions \((u_{1},\lambda)\) and \((u_{2},\gamma)\); we have \begin{eqnarray*} a_{s}(u_{1}-u_{2},v)+b(v,\lambda-\gamma)&=&0\\ a_{s}(u_{1}-u_{2},u_{1}-u_{2})&=& b(u_{2}-u_{1},\lambda-\gamma)\\ &=& b_{1}(u_{2}-u_{1},\lambda_N-\gamma_{1})+b_{2}(u_{2}-u_{1},\lambda_T-\gamma_{2})\leq 0, \end{eqnarray*} hence, by coercivity of \(a_{s}\), we get \(u_{1}=u_{2}\) and \(\lambda=\gamma\).
Step 4. Using Step 3, a finite number of times, we deduce that the Problem 4 admits a unique solution \((u,\lambda )\) for \(t=1\).
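To make the continuation argument of Steps 3 and 4 concrete, here is a small finite-dimensional analogue. It drops the inequality constraints, so each subproblem reduces to a plain linear solve; all matrices are hypothetical and only the homotopy-plus-Banach-iteration pattern mirrors the proof.

```python
import numpy as np

# Illustrative analogue of Steps 3-4: solve (A0 + C) x = f, where A0 is the symmetric
# coercive part and C the antisymmetric part, by a continuation in t combined with a
# Banach fixed point at each step. All data below are hypothetical.
rng = np.random.default_rng(2)
n = 10
M = rng.standard_normal((n, n))
A0 = M @ M.T + n * np.eye(n)                    # symmetric, coercive part a_0
C = rng.standard_normal((n, n)); C = 0.5 * (C - C.T)   # antisymmetric part c
f = rng.standard_normal(n)

m_a = np.linalg.eigvalsh(A0).min()              # coercivity constant of a_0
t0 = 0.9 * m_a / np.linalg.norm(C, 2)           # step small enough for the map to contract

x = np.linalg.solve(A0, f)                      # Step 1: solution for t = 0
tau = 0.0
while tau < 1.0:
    s = min(1.0, tau + t0)
    w = x.copy()
    for _ in range(200):                        # Banach iteration: w -> A_tau^{-1}(f - (s - tau) C w)
        w_new = np.linalg.solve(A0 + tau * C, f - (s - tau) * (C @ w))
        if np.linalg.norm(w_new - w) < 1e-12:
            break
        w = w_new
    x, tau = w_new, s                           # x now solves (A0 + s C) x = f

print("residual at t = 1:", np.linalg.norm((A0 + C) @ x - f))
```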
Step 5. In order to get (38), let us consider the data \(f_{1},f_{2}\in X\) and the corresponding solutions \((u_{1},\lambda)\) and \((u_{2},\gamma)\): $$a(u_{1},v)+b(v,\lambda)=(f_{1},v)_{X},\hspace{1cm}\forall\;\; v \in X,$$ $$b_{1}(u_{1},\delta-\lambda_N)\leq 0,\hspace{1cm}\forall\;\; \delta\in M_N,$$ $$b_{2}(u_{1},\delta-\lambda_T)\leq 0,\hspace{1cm}\forall\;\; \delta\in M_T,$$ and $$a(u_{2},v)+b(v,\gamma)=(f_{2},v)_{X},\hspace{1cm}\forall\;\; v \in X,$$ $$b_{1}(u_{2},\delta-\gamma_{1})\leq 0,\hspace{1cm}\forall\;\; \delta\in M_N,$$ $$b_{2}(u_{2},\delta-\gamma_{2})\leq 0,\hspace{1cm}\forall\;\; \delta\in M_T.$$ By subtracting these two equations, we find $$a(u_{1}-u_{2},v)+b(v,\lambda)-b(v,\gamma)=(f_{1}-f_{2},v)_{X}.$$ For \(v=u_{1}-u_{2}\), we have $$a(u_{1}-u_{2},u_{1}-u_{2})+b(u_{1}-u_{2},\lambda)-b(u_{1}-u_{2},\gamma)=(f_{1}-f_{2},u_{1}-u_{2})_{X},$$ which implies that \begin{eqnarray*} a(u_{1}-u_{2},u_{1}-u_{2})&=&b(u_{1}-u_{2},\gamma)-b(u_{1}-u_{2},\lambda)+(f_{1}-f_{2},u_{1}-u_{2})_{X}\\ &\leq& b_{2}(u_{1}-u_{2},\gamma_{2}-\lambda_T)+(f_{1}-f_{2},u_{1}-u_{2})_{X}\\ m_{a}||u_{1}-u_{2}||^{2}_{X}&\leq& ||f_{1}-f_{2}||_{X}||u_{1}-u_{2}||_{X},\end{eqnarray*} that is
\begin{equation}\label{3.27} m_{a}||u_{1}-u_{2}||^{2}_{X}\leq ||f_{1}-f_{2}||_{X}||u_{1}-u_{2}||_{X}, \end{equation}
(56)
and by (34), we have
\begin{equation}\label{3.28} \alpha ||\gamma-\lambda||_{Y\times Y}\leq M_{a}||u_{1}-u_{2}||_{X}+||f_{1}-f_{2}||_{X}\;. \end{equation}
(57)
Using (56), we can write
\begin{equation}\label{3.29} m_{a}||u_{1}-u_{2}||^{2}_{X}\leq \frac{1}{2c_{1}}||f_{1}-f_{2}||_{X}^{2}+\frac{c_{1}}{2}||u_{1}-u_{2}||_{X}^{2}+\frac{c_{2}}{2}||\gamma-\lambda||_{Y\times Y}^{2}, \end{equation}
(58)
where \(c_{1}\), \(c_{2}\) are strictly positive constants. By combining this inequality and (57), we deduce that
\begin{equation}\label{3.30} \left( m_{a}-\frac{c_{1}}{2}-\frac{c_{2}M_{a}^{2}}{\alpha^{2}} \right) ||u_{1}-u_{2}||^{2}_{X}\leq \left( \frac{1}{2c_{1}}+\frac{c_{2}}{\alpha^{2}} \right) ||f_{1}-f_{2}||_{X}^{2}. \end{equation}
(59)
The constants \(c_{1}\) and \(c_{2}\) are chosen such that \( m_{a}-\frac{c_{1}}{2}-\frac{c_{2}M_{a}^{2}}{\alpha^{2}}>0 \); we deduce that there exists \(c = c\left( m_{a}, M_{a}, M_{b}, \alpha \right) \) such that
\begin{equation}\label{3.31} ||u_{1}-u_{2}||_{X}\leq c \left( ||f_{1}-f_{2}||_{X} \right) . \end{equation}
(60)
Finally, combining (57) and (60), we obtain (38).

Proof of Theorem 1. We consider \(X=\tilde{V}\), \(Y=H_\Gamma^{*}\) and \(M=M_N \times M_T\) given by (18) and (19). The subset \(M_N \times M_T\) is a non-empty, closed, convex subset of \(H_\Gamma^{*} \times H_\Gamma^{*}\) and \(0_{H_\Gamma^{*}\times H_\Gamma^{*}} \in M\).
By using (13)-(15) we deduce that there exist \(M_{a} =M_{a}(\mathcal{A},\mathcal{B},\beta)>0\) and \(m_{a} =m_{a}(\mathcal{A},\beta)>0\) such that the bilinear form \(a(.,.)\) satisfies $$ | a(\tilde{u},\tilde{v})| \leq M_{a} \| \tilde{u}\|_{\tilde{V}}\| \tilde{v}\|_{\tilde{V}},\;\; \forall\;\; \tilde{u},\tilde{v} \in \tilde{V}, $$ $$ a(\tilde{u},\tilde{u}) \geq m_{a} \| \tilde{u}\|_{\tilde{V}}^{2}, \;\; \forall\;\; \tilde{u} \in \tilde{V}. $$ By the conditions (20) and (21) we deduce that the bilinear form \(b(.,.)\) satisfies (32). Using the inf-sup property (25), which gives (34), and Theorem 2, we obtain the result of Theorem 1.

Proof of Proposition 1. To prove the convergence result of Proposition 1 for Algorithm 1, let us reconsider the perturbed problem above, for \(t\in [\tau;\tau+t_{0}]\subset [0,1]\), where this time:

\begin{equation} 0< t_{0}<\frac{\alpha m_a-\alpha m_aM_b}{M_a(\alpha+m_a+2M_a)},\label{cond1} \end{equation}
(61)
if \(M_b<1\). If \(M_b>1\), we take \(t\in [\tau+t_{0};\tau]\subset [0,1]\) with
\begin{equation} \frac{-\alpha m_aM_b}{M_a(\alpha+m_a+2M_a)}< t_{0}<\frac{\alpha m_a-\alpha m_aM_b}{M_a(\alpha+m_a+2M_a)}.\label{cond2} \end{equation}
(62)
Given \(f,g\in X\), we define the mapping \(T:X\times M \longrightarrow X \times M\) as follows \(T(w,\xi):=(u,\lambda)\) if \((u,\lambda)\) is the solution of the following fixed point problem:

Problem 6. (Fixed point problem) For given \(f,g\in X\), find \(u\in X\) and \(\lambda \in M\), such that

\begin{equation}\label{3.34} a_{\tau}(u,v)=(F_{s},v)_{X}-b(v,\xi),\hspace{1cm}\forall\;\; v\in X, \end{equation}
(63)
\begin{equation}\label{3.35} b_{1}(u,\delta-\lambda_N)\leq 0,\hspace{1cm}\forall\;\; \delta \in M_N, \end{equation}
(64)
\begin{equation}\label{3.36} b_{2}(u,\delta-\lambda_T)\leq 0,\hspace{1cm}\forall\;\; \delta \in M_T. \end{equation}
(65)
where \((F_{s},v)_{X}=(f,v)_{X}-(s-\tau)c(w,v)\) and \(\tau \leq s \leq \tau+t_{0} \leq 1\).

It is straightforward that
\begin{equation} ||u_{1}-u_{2}||_{X}+||\lambda -\gamma ||_{Y \times Y} \leq \frac{t_{0}M_{a}(\alpha+m_{a}+2M_{a})+\alpha m_a M_b}{\alpha m_{a}}||(w_{1},\xi)-(w_{2},\chi)||_{X \times Y \times Y}.\label{contra} \end{equation}
(66)
By the conditions (61)-(62) and (66), the operator \(T\) is a contraction. This implies that there exists \((\bar{u},\bar{\lambda} )\) such that \(T(\bar{u} ,\bar{\lambda} )=(\bar{u} ,\bar{\lambda} )\). Hence, setting \(T(u^{\ell},\lambda^{\ell})=(u^{\ell+1},\lambda^{\ell+1})\), the following scheme converges:

Problem 7. For given \(f\in X\), find \(u^{\ell+1}\in X\) and \(\lambda^{\ell+1} \in M\), such that

\begin{equation} a_{\tau}(u^{\ell+1},v)=(F_{s},v)_{X}-b(v,\lambda^{\ell}),\hspace{1cm}\forall\;\; v\in X,\label{3.38} \end{equation}
(67)
\begin{equation} b_{1}(u^{\ell+1},\delta-\lambda_N^{\ell+1})\leq 0,\hspace{1cm}\forall\;\; \delta\in M_N,\label{4.39} \end{equation}
(68)
\begin{equation} b_{2}(u^{\ell+1},\delta-\lambda_T^{\ell+1})\leq 0,\hspace{1cm}\forall\;\; \delta\in M_T.\label{4.40} \end{equation}
(69)
The convergence of the iterative fixed point scheme in Algorithm 1 follows directly.

4. Discretization and numerics

The problem now is how to identify the multipliers \(\lambda_N\) and \(\lambda_T\) in the convex sets \(M_{N}\) and \(M_{T}\). One way to do this is to use projection maps. To this end, we consider the finite dimensional spaces \(V^{h}\subset V\), \(K^{h} = K \cap V^{h}\) and \(W^{h}\subset W\) approximating the spaces \(V\) and \(W\), respectively, in which \(h>0\) denotes the spatial discretization parameter. Let us define: $$ X_{n}^{h}=\left\lbrace v_{n| \Gamma_3}^{h}:v^{h}\in V^{h} \right\rbrace , \; X_{T}^{h}=\left\lbrace v_{\tau | \Gamma_3}^{h}:v^{h}\in V^{h} \right\rbrace , $$ and $$ X^{h}=\left\lbrace v_{|\Gamma_3}^{h}:v^{h}\in V^{h} \right\rbrace = X_{n}^{h}\times X_T^{h}. $$ Let us denote also \(X^{* h}_{n} \subset X^{*}_{n} \bigcap L^{2}(\Gamma_3)\) and \(X^{*h}_{T} \subset X^{*}_{T} \bigcap L^{2}(\Gamma_3;\mathbb{R}^{d-1})\) the finite discretizations of \(X^{*}_{n}\) and \(X^{*}_{T}\) respectively, such that the following discrete Babuska-Brezzi inf-sup conditions hold: $$ \inf_{\lambda^{h}_{T}\in X^{*h}_{T}} \sup_{v^{h}\in V^{h}}\dfrac{\left\langle \lambda^{h}_{T},v^{h}_\tau \right\rangle }{\| v^{h} \|_{V} \| \lambda^{h}_{T} \|_{X^{*h}_{T}} }\geq \alpha >0,\; \inf_{\lambda^{h}_{N}\in X^{*h}_{n}} \sup_{v^{h}\in V^{h}}\dfrac{\left\langle \lambda^{h}_{N},v^{h}_{n} \right\rangle }{\| v^{h} \|_{V} \| \lambda^{h}_{N} \|_{X^{*h}_{n}} }\geq \alpha >0, $$ with \(\alpha\) independent of \(h\). We consider the following discrete approximation of Problem 2:

Problem 8. Find \(\tilde{u}^{h}\in\tilde{V}^{h}\) and \(\lambda^{h}=(\lambda_{N}^{h},\lambda_{T}^{h})\in M_N^{h}\times M_T^{h}(\mu\lambda_N^h)\) such that: $$\left\lbrace \begin{array}{c} a(\tilde{u}^{h},\tilde{v}^{h})+b_{1}(\tilde{v}^{h},\lambda_N^{h})+b_{2}(\tilde{v}^{h},\lambda_T^{h})=(f^{h},\tilde{v}^{h})_{\tilde{V}},\;\; \forall\;\; \tilde{v}^{h}\in\tilde{V}^{h},\\ \lambda_N^{h}=P_{M_{N}^{h}}\left( \lambda_N^{h}-ru^{h}_{n} \right),\\ \lambda_T^{h}=P_{M_{T}^{h}(\mu\lambda_N^h)}\left( \lambda_T^{h}-ru^{h}_\tau \right), \end{array}\right. $$ where \(r>0\) and $$ M_{T}^{h}(\mu\lambda_N^h)=\left\lbrace \delta^{h} \in X^{* h}_{T},\hspace{1cm}\langle \delta^{h} ,v^{h} \rangle_{\Gamma_{3}}\leq\int_{\Gamma_{3}}\mu\lambda_N^h|v_\tau^{h}| d \Gamma,\;v^{h}\in H_\Gamma \right\rbrace , $$ $$ M_{N}^{h} = \left \lbrace \delta^{h} \in X^{* h}_{n},\hspace{1cm}\langle \delta^{h} ,v^{h} \rangle_{\Gamma_{3}}\leq 0,\;v^{h}\in K^{h}_n \right\rbrace , $$ and where \(P_{M}\) is the projection over \(M\). For more details we refer to [27].
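To illustrate the structure of such a projection-based solution procedure, the following sketch runs a generic Uzawa-type loop for a finite-dimensional Signorini problem with Coulomb friction (a fixed point on the slip bound). It is not the paper's Algorithm 2: the toy stiffness matrix, the step \(r\), the extraction operators N and T and the sign conventions are all assumptions made for the illustration.

```python
import numpy as np

# Minimal illustrative sketch: Uzawa iteration with projections for a finite-dimensional
# Signorini + Coulomb friction model. Only the pattern "solve, then project the
# multipliers" mirrors Problem 8; all data below are hypothetical.
rng = np.random.default_rng(0)
n = 6                                            # toy number of contact nodes (2D: one normal, one tangential dof each)
K = np.eye(2 * n) * 4.0 + 0.1 * rng.standard_normal((2 * n, 2 * n))
K = 0.5 * (K + K.T)                              # symmetric positive definite "stiffness" (assumption)
f = rng.standard_normal(2 * n)                   # external load (assumption)
g = np.zeros(n)                                  # gap to the rigid foundation
F = 0.6                                          # friction coefficient (value of the numerical example)
N = np.hstack([np.eye(n), np.zeros((n, n))])     # extracts normal components (assumption)
T = np.hstack([np.zeros((n, n)), np.eye(n)])     # extracts tangential components (assumption)

lam_N = np.zeros(n)                              # contact multiplier (kept >= 0 in this convention)
lam_T = np.zeros(n)                              # friction multiplier
r = 1.0                                          # Uzawa step

for it in range(200):
    # displacement solve: contact and friction reactions enter the right-hand side
    u = np.linalg.solve(K, f - N.T @ lam_N - T.T @ lam_T)
    # projection steps (fixed point on the multipliers)
    lam_N_new = np.maximum(0.0, lam_N + r * (N @ u - g))   # Signorini projection
    s = F * lam_N_new                                      # Coulomb slip bound (fixed point on the threshold)
    lam_T_new = np.clip(lam_T + r * (T @ u), -s, s)        # projection onto the ball B(0, s), scalar tangential case
    if np.linalg.norm(lam_N_new - lam_N) + np.linalg.norm(lam_T_new - lam_T) < 1e-10:
        lam_N, lam_T = lam_N_new, lam_T_new
        break
    lam_N, lam_T = lam_N_new, lam_T_new

print("iterations:", it + 1, " max penetration:", float(np.max(N @ u - g)))
```

At convergence the two projection updates enforce, in discrete form, non-penetration with complementarity and a friction force bounded by the slip threshold, in the spirit of the contact and friction conditions (10)-(11).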

4.1. Matrix formulation

In this section, we adopt the same technical discretization as in the work [27]. Let \(\textbf{a}_j\) (\(j=1,\ldots,n_c\)) be the contact nodes (i.e. the nodes forming \(\Gamma_3\)). The displacement vector at \(\textbf{a}_j\) is denoted by \(\textbf{u}_j\), i.e. \(\textbf{u}_j=u(\textbf{a}_j)\). We denote by \(\textbf{n}_j\) and \(\textbf{t}_j\) the unit outward normal vector and the unit tangential vector to \(\Gamma_3\), respectively. Let us introduce the linear mappings
  • \(\textbf{N} \,:\,\mathbb{R}^{2d}\rightarrow\mathbb{R}^{n_c}\), such that \((\textbf{N}\textbf{u})_j=\textbf{u}_j^\top \textbf{n}_j\), \(j=1,\ldots,n_c\).
  • \(\textbf{T}\,:\,\mathbb{R}^{2d}\rightarrow\mathbb{R}^{n_c}\), such that \((\textbf{T}\textbf{u})_j=\textbf{u}_j-(\textbf{u}_j^\top \textbf{n}_j)\textbf{n}_j=(\mathbb{I}_d-\textbf{n}_j\textbf{n}_j^\top)\textbf{u}_j\), \(j=1,\ldots,n_c\).
The finite element discretization leads to the following matrices and vectors:
  • \(\textbf{A}\), \((2d)\times (2d)\) the elastic matrix (symmetric and positive definite) ;
  • \(\textbf{B}\), \(d \times d\) electric potential stiffness matrix (symmetric positive definite);
  • \(\textbf{E}\), \(d \times (2d)\) coupling matrix;
  • \(\textbf{M}_{n}\) and \( \textbf{M}_\tau\) normal and tangential mass matrices (\(n_c\times n_c\));
  • \( \textbf{f}\) (the external forces in \(\mathbb{R}^{2d}\)), \(\textbf{q}\) (the external charges in \(\mathbb{R}^{d}\)).
  • \(\lambda_{N}\), \(\lambda_{T}\) the vectors associated to \(\lambda_N^h\) and \(\lambda_T^h\) respectively.
With the above notations, we can solve Problem 8 with Coulomb friction using a fixed point procedure on the friction threshold (see [27, 28] for more details). The fixed point on the friction threshold is combined with the Uzawa iteration in Algorithm 2,
where $$\mathbf{U}=\left[\begin{array}{c} \textbf{u} \\ \varphi \end{array} \right] ,\; \mathcal{A}=\left[ \begin{array}{cc} \mathbf A & -\mathbf E^\top \\ \mathbf E & \mathbf B \end{array} \right] ,$$ $$b^\ell=\left[ \begin{array}{c} \mathbf b_1 \\ \mathbf b_2\end{array}\right] =\left[ \begin{array}{c} \mathbf f +\mathbf M_{n}\textbf{N}\lambda_{N}^{\ell} +\mathbf M_\tau\textbf{T}\lambda_{T}^{\ell} \\ \mathbf q \end{array}\right] ,$$ \(B(0,-\mathcal{F}\lambda_N^{\ell+1})\) denotes the ball of center \(0\) and radius \(-\mathcal{F}\lambda_N^{\ell+1}>0\), and \(x^+\) denotes the non-negative part of \(x\), i.e. \(x^+=\max(0,x).\)

Remark 1. In practice, Algorithm 2 solves a Tresca friction problem with slip bound \(S^\ell\), and a fixed point iteration on the slip bound is used to recover the problem with Coulomb friction, i.e. \(S^{\ell+1}=-\mathcal{F}\lambda_N^{\ell}\).

To compute the solution of the system (70), we proceed by the following elimination technique:
\begin{equation} \varphi=\mathbf B^{-1}\mathbf E \textbf{u}+\mathbf B^{-1}\mathbf b_2,\label{subsystem2} \end{equation}
(73)
\begin{equation} \mathbf S_c \textbf{u}= \mathbf b_1-\mathbf E^\top\mathbf B^{-1}\mathbf b_2,\label{subsystem1} \end{equation}
(74)
where \(\mathbf S_c\) is the Schur complement given by \(\mathbf S_c=\mathbf A -\mathbf E^\top \mathbf B^{-1} \mathbf E\). Since the matrices \(\mathbf A\) and \(\mathbf B^{-1}\) are symmetric and positive definite, a suitable method for solving the subsystem (74) is the conjugate gradient (CG) method. We take advantage of the Schur complement \(\mathbf S_c\) to obtain a convenient preconditioner; as discussed in [16] and the references therein, the CG preconditioner is \(\mathbf P=\mathbf A\). The preconditioned conjugate gradient method for solving the system (74) is stated in Algorithm 3. Once \(\textbf{u}\) is computed, one can compute \(\varphi\) by the explicit formula (73).
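As a minimal illustration of the elimination-plus-preconditioned-CG idea (not the paper's Algorithm 3), the sketch below solves a small hypothetical block system. The signs of the coupling block are chosen here so that the resulting Schur complement is symmetric positive definite and may therefore differ from the conventions used above; only the overall pattern (eliminate the potential, solve the Schur system by CG preconditioned with A, recover the potential) is the point.

```python
import numpy as np
from scipy.sparse.linalg import LinearOperator, cg

# Hypothetical small dense blocks; in the paper A, B, E come from the finite element assembly.
rng = np.random.default_rng(1)
nu, nphi = 8, 5
A = rng.standard_normal((nu, nu)); A = A @ A.T + nu * np.eye(nu)           # SPD "elastic" block
B = rng.standard_normal((nphi, nphi)); B = B @ B.T + nphi * np.eye(nphi)   # SPD "dielectric" block
E = rng.standard_normal((nphi, nu))                                        # coupling block
b1 = rng.standard_normal(nu)
b2 = rng.standard_normal(nphi)

# Block system solved in this sketch:  A u + E^T phi = b1,   -E u + B phi = b2.
# Eliminating phi = B^{-1}(b2 + E u) gives the Schur complement equation
#   (A + E^T B^{-1} E) u = b1 - E^T B^{-1} b2,
# whose operator is symmetric positive definite, so CG applies; we precondition with A.
def schur_matvec(u):
    return A @ u + E.T @ np.linalg.solve(B, E @ u)

S = LinearOperator((nu, nu), matvec=schur_matvec, dtype=float)
rhs = b1 - E.T @ np.linalg.solve(B, b2)
P = LinearOperator((nu, nu), matvec=lambda x: np.linalg.solve(A, x), dtype=float)  # preconditioner P = A

u, info = cg(S, rhs, M=P)
phi = np.linalg.solve(B, b2 + E @ u)

# verify against the full block system
res = np.linalg.norm(A @ u + E.T @ phi - b1) + np.linalg.norm(-E @ u + B @ phi - b2)
print("cg exit flag:", info, " block residual:", res)
```

The matrix-free LinearOperator keeps only the action of the Schur complement, which is how one would proceed when B is large and sparse: each CG iteration then costs one solve with B plus matrix-vector products with A and E.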

4.2. Numerical example

The algorithms are implemented in MATLAB on a computer running Windows 10 with a 2.4 GHz Core i7 processor and 6 GB of RAM. As an example, the domain is the two-dimensional rectangle \(\Omega=(0,\;2)\times (0,\;1)\) shown in Figure 1, with boundaries \(\Gamma_D =\{0\}\times \left[0,\;1\right]\cup \{2\}\times \left[0,\;1\right]\), \(\Gamma_3 = \left[0,\;2\right]\times \{0\}\) and \(\Gamma_N =\left[0,\;2\right]\times \{1\}\). The external body force and charge are \(f=0\) and \(q=0\), respectively. On \(\Gamma_D\) the displacements and the electric potential are prescribed, i.e., \(\textbf{u}=0\) and \(\varphi=0\) on \(\Gamma_D\). On \(\Gamma_N\), a non-homogeneous Neumann boundary condition is prescribed, \(f_0=\sigma (u).n=-2\). On \(\{1\}\times(0,\;1)\) the homogeneous Neumann boundary condition is applied (\(\sigma n=0\) and \(Dn=0\)). For the sake of simplicity, the normalized gap between \(\Gamma_3\) and the foundation is \(g(\mathbf x)=0\) and the friction coefficient is \(\mathcal{F}=0.6\). The mesh is generated using the MATLAB function "kmg.m" developed by the author of [29].
The deformed configuration is shown in Figure 2, the Lagrange multipliers \(\lambda_{T,N}\) on the contact zone \(\Gamma_3\) for the problem with the Coulomb friction condition are shown in Figure 3, and the contour plot of the electric potential distribution is shown in Figure 4. Figures 5 and 6 show the Lagrange multipliers \(\lambda_{T,N}\) for different choices of the load acting on \(\Gamma_2\); it is clear that sliding occurs when the load is large enough. The performance of the algorithm is reported in Table 1: the number of iterations is independent of the mesh refinement, while the CPU time grows significantly as the mesh is refined.

Figure 1. Initial configuration.

Figure 2. Deformed configuration.

Figure 3. Multipliers for \(\mathcal{F}=0.6\).

Figure 4. Contour plot of electric potential.

Figure 5. Multipliers with load \(-4\).

Figure 6. Multipliers with load \(-6\).

Table 1. Performance of the algorithms.
Mesh size \(h\) 1/32 1/64 1/128 1/256
Number of iterations of FP 3 4 4 4
Number of iterations of UA 39 39 39 39
Number of iterations of PCG 2 2 2 2
CPU time (seconds) 0.1341 2.0863 21.1882 238.4585

5. Conclusion

We have investigated the numerical analysis of a model describing the process of contact between a piezoelectric body and a non-conductive foundation. The behavior of the material is modeled with an electro-elastic constitutive law. The contact is formulated by Signorini conditions and Coulomb friction. In future work, more general problems with a nonlinear constitutive equation and non-monotone friction will be treated; this may be handled by a fixed point iteration and hemivariational inequalities [30].

Author Contributions

All authors contributed equally to the writing of this paper. All authors read and approved the final manuscript.

Competing Interests

The author(s) do not have any competing interests in the manuscript.

References

  1. Arnau, A., & Soares, D. (2009). Fundamentals of piezoelectricity. In Piezoelectric transducers and applications (pp. 1-38). Springer, Berlin, Heidelberg.
  2. Katzir, S. (2006). The discovery of the piezoelectric effect. In The beginnings of piezoelectricity (pp. 15-64). Springer, Dordrecht.
  3. Bisegna, P., Maceri, F., & Lebon, F. (2002). The unilateral frictional contact of a piezoelectric body with a rigid support. In Contact mechanics (pp. 347-354). Springer, Dordrecht.
  4. Taik, A., Essoufi, E., Benkhira, E., & Fakhar, R. (2010). Analysis and numerical approximation of an electro-elastic frictional contact problem. Mathematical Modelling of Natural Phenomena, 5(7), 84-90.
  5. Shillor, M., Sofonea, M., & Telega, J. J. (2004). Models and analysis of quasistatic contact: variational methods (Vol. 655). Springer Science & Business Media.
  6. Sofonea, M., & Essoufi, E. H. (2004). A piezoelectric contact problem with slip dependent coefficient of friction. Mathematical Modelling and Analysis, 9(3), 229-242.
  7. Sofonea, M., & Matei, A. (2012). Mathematical models in contact mechanics (No. 398). Cambridge University Press.
  8. Barboteu, M., Fernández, J. R., & Ouafik, Y. (2008). Numerical analysis of two frictionless elastic-piezoelectric contact problems. Journal of Mathematical Analysis and Applications, 339(2), 905-917.
  9. Kikuchi, N., & Oden, J. T. (1988). Contact problems in elasticity: a study of variational inequalities and finite element methods (Vol. 8). SIAM.
  10. Ekeland, I., & Temam, R. (1999). Convex analysis and variational problems (Vol. 28). SIAM.
  11. Fortin, M., & Glowinski, R. (2000). Augmented Lagrangian methods: applications to the numerical solution of boundary-value problems. Elsevier.
  12. Glowinski, R., & Le Tallec, P. (1989). Augmented Lagrangian and operator-splitting methods in nonlinear mechanics (Vol. 9). SIAM.
  13. Glowinski, R., & Marroco, A. (1975). Sur l'approximation, par éléments finis d'ordre un, et la résolution, par pénalisation-dualité d'une classe de problèmes de Dirichlet non linéaires. ESAIM: Mathematical Modelling and Numerical Analysis-Modélisation Mathématique et Analyse Numérique, 9(R2), 41-76.
  14. Stadler, G. (2004). Semismooth Newton and augmented Lagrangian methods for a simplified friction problem. SIAM Journal on Optimization, 15(1), 39-62.
  15. Essoufi, E. H., Fakhar, R., & Koko, J. (2015). A decomposition method for a unilateral contact problem with Tresca friction arising in electro-elastostatics. Numerical Functional Analysis and Optimization, 36(12), 1533-1558.
  16. Essoufi, E. H., Koko, J., & Zafrar, A. (2017). Alternating direction method of multiplier for a unilateral contact problem in electro-elastostatics. Computers & Mathematics with Applications, 73(8), 1789-1802.
  17. Hüeber, S., Matei, A., & Wohlmuth, B. I. (2005). A mixed variational formulation and an optimal a priori error estimate for a frictional contact problem in elasto-piezoelectricity. Bulletin mathématique de la Société des Sciences Mathématiques de Roumanie, 209-232.
  18. Matei, A. (2009). A variational approach for an electro-elastic unilateral contact problem. Mathematical Modelling and Analysis, 14(3), 323-334.
  19. Matei, A., & Sofonea, M. (2017). A mixed variational formulation for a piezoelectric frictional contact problem. IMA Journal of Applied Mathematics, 82(2), 334-354.
  20. Haslinger, J., & Sassi, T. (2004). Mixed finite element approximation of 3D contact problems with given friction: error analysis and numerical realization. ESAIM: Mathematical Modelling and Numerical Analysis, 38(3), 563-578.
  21. Mandel, J. (1990). On block diagonal and Schur complement preconditioning. Numerische Mathematik, 58(1), 79-93.
  22. Brezzi, F., & Fortin, M. (2012). Mixed and hybrid finite element methods (Vol. 15). Springer Science & Business Media.
  23. Duvant, G., & Lions, J. L. (2012). Inequalities in mechanics and physics (Vol. 219). Springer Science & Business Media.
  24. Rabel, R. G., Han, W., & Sofonea, M. (2002). Quasistatic contact problems in viscoelasticity and viscoplasticity. American Mathematical Society.
  25. Kinderlehrer, D., & Stampacchia, G. (1980). An introduction to variational inequalities and their applications (Vol. 31). SIAM.
  26. Voigt, W. (1928). Lehrbuch der Kristallphysik (Vol. 962). Leipzig: Teubner.
  27. Khenous, H. B., Pommier, J., & Renard, Y. (2004). Hybrid discretization of the Signorini problem with Coulomb friction. Theoretical aspects and comparison of some numerical solvers. To appear in Applied Numerical Mathematics.
  28. Laborde, P., & Renard, Y. (2008). Fixed point strategies for elastostatic frictional contact problems. Mathematical Methods in the Applied Sciences, 31(4), 415-441.
  29. Koko, J. (2015). A Matlab mesh generator for the two-dimensional finite element method. Applied Mathematics and Computation, 250, 650-664.
  30. Migórski, S., Ochal, A., & Sofonea, M. (2012). Nonlinear inclusions and hemivariational inequalities: models and analysis of contact problems (Vol. 26). Springer Science & Business Media.
]]>
Study of asymptotic behavior of solutions of neutral mixed type difference equations https://old.pisrt.org/psr-press/journals/oma-vol-4-issue-1-2020/study-of-asymptotic-behavior-of-solutions-of-neutral-mixed-type-difference-equations/ Mon, 30 Mar 2020 20:44:22 +0000 https://old.pisrt.org/?p=3929
OMA-Vol. 4 (2020), Issue 1, pp. 11 - 19 Open Access Full-Text PDF
Manel Gouasmia, Abdelouaheb Ardjouni, Ahcene Djoudi
Abstract: In this paper, we consider a neutral mixed type difference equation, and obtain explicitly sufficient conditions for asymptotic behavior of solutions. A necessary condition is provided as well. An example is given to illustrate our main results.
]]>

Open Journal of Mathematical Analysis

Study of asymptotic behavior of solutions of neutral mixed type difference equations

Manel Gouasmia, Abdelouaheb Ardjouni\(^1\), Ahcene Djoudi
Applied Mathematics Lab, Faculty of Sciences, Department of Mathematics, Univ Annaba, P.O. Box 12, Annaba 23000, Algeria.; (M.G & A.D)
Faculty of Sciences and Technology, Department of Mathematics and Informatics, Univ Souk Ahras, P.O. Box 1553, Souk Ahras, 41000, Algeria.; (A.A)
\(^1\)Corresponding Author: abd_ardjouni@yahoo.fr

Abstract

In this paper, we consider a neutral mixed type difference equation, and obtain explicitly sufficient conditions for asymptotic behavior of solutions. A necessary condition is provided as well. An example is given to illustrate our main results.

Keywords:

Contraction mapping, neutral difference equations, mixed type, asymptotic behavior.

1. Introduction

Certainly, the Lyapunov direct method has been, for more than 100 years, the main tool for the study of stability properties of ordinary, functional, partial differential and difference equations. Nevertheless, the application of this method to problems of stability in differential and difference equations with delay has encountered serious difficulties if the delay is unbounded or if the equation has unbounded terms ([1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16]). Recently, Burton, Furumochi, Zhang, Raffoul, Islam, Yankson and others have noticed that some of these difficulties vanish or might be overcome by means of fixed point theory (see [17, 18, 19, 20, 21, 22, 23, 24, 25, 26, 27, 28, 29, 30, 31, 32]). Fixed point theory does not only solve the problem of stability but also has a significant advantage over Lyapunov's direct method: the conditions of the former are often averages, while those of the latter are usually pointwise (see [1]). In this paper, we consider the following mixed type neutral difference equation
\begin{equation} \Delta x\left( t\right) +a\left( t\right) \Delta x\left( \tau \left( t\right) \right) +\sum_{i=1}^{k}b_{i}\left( t\right) x\left( \sigma _{i}\left( t\right) \right) +\sum_{j=1}^{l}c_{j}\left( t\right) x\left( \tau _{j}\left( t\right) \right) =0, \label{1} \end{equation}
(1)
with an assumed initial condition
\begin{equation} x(t)=\psi (t)\text{ for }t\in \lbrack m\left( t_{0}\right) ,t_{0}]\cap \mathbb{Z}, \label{2} \end{equation}
(2)
where \(\psi :[m\left( t_{0}\right) ,t_{0}]\cap \mathbb{Z}\rightarrow \mathbb{\mathbb{R}}\) is a bounded sequence and for \(t_{0}\geq 0\) \begin{equation*} m\left( t_{0}\right) =\inf \{\sigma _{i}(s):s\geq t_{0},\ i=1,...k\}. \end{equation*} Here \(\Delta \) denotes the forward difference operator \(\Delta x(t)=x(t+1)-x(t)\) for any sequence \(\left\{ x\left( t\right) ,\ t\in \mathbb{ Z}^{+}\right\} \). For more details on the calculus of difference equations, we refer the reader to [11] and [24]. Throughout this paper, we assume that \(a\), \(b_{i}\) and \(c_{j}\) are bounded sequences, and \(\tau \), \( \sigma _{i}\) and \(\tau _{j}\) are non-negative sequences such that \begin{eqnarray*} \tau \left( t\right) &\rightarrow &\infty \text{ as }t\rightarrow \infty ,\ \tau \left( t\right) \geq t,\ t\geq t_{0}, \\ \sigma _{i}\left( t\right) &\rightarrow &\infty \text{ as }t\rightarrow \infty ,\ i=1,...,k,\ \sigma _{i}\left( t\right) \leq t,\ t\geq t_{0}, \\ \tau _{j}\left( t\right) &\rightarrow &\infty \text{ as }t\rightarrow \infty ,\ j=1,...,l,\ \tau _{j}\left( t\right) \geq t,\ t\geq t_{0}. \end{eqnarray*} Equation (1) can be viewed as a discrete analogue of the mixed type neutral differential equation;
\begin{equation} x^{\prime }\left( t\right) +a\left( t\right) x^{\prime }\left( \tau \left( t\right) \right) +\sum_{i=1}^{k}b_{i}\left( t\right) x\left( \sigma _{i}\left( t\right) \right) +\sum_{j=1}^{l}c_{j}\left( t\right) x\left( \tau _{j}\left( t\right) \right) =0. \label{66} \end{equation}
(3)
In [25], Bicer investigated (3) and obtained the asymptotic behavior of solutions. Our purpose here is to show the asymptotic behavior of solutions for (1). An asymptotic stability theorem with a necessary and sufficient condition is proved by using the contraction mapping theorem. For details on contraction mapping principle we refer the reader to [33] . An example is given to illustrate our main results.

2. Main results

Theorem 1. Let \(a\), \(b_{i}\) and \(c_{j}\) be non-positive sequences. Assume that the following inequality has a nonnegative solution \begin{equation*} -a\left( t\right) \lambda \left( \tau \left( t\right) \right) \prod_{u=t}^{\tau \left( t\right) -1}\left( 1-\lambda \left( u\right) \right) -\sum_{i=1}^{k}b_{i}\left( t\right) \prod_{u=t}^{\sigma _{i}\left( t\right) -1}\left( 1-\lambda \left( u\right) \right) -\sum_{j=1}^{l}c_{j}\left( t\right) \prod_{u=t}^{\tau _{j}\left( t\right) -1}\left( 1-\lambda \left( u\right) \right) \leq\lambda \left( t\right) ,\ t\geq t_{0}, \end{equation*} with \(\lambda \left( t\right) < 1\). Then, (1) has a positive solution.

Proof. Let \(\lambda _{0}\) be a nonnegative solution of the inequality in Theorem 1. Set \begin{equation*} \lambda _{n}\left( t\right) =\left\{ \begin{array}{l} \lambda _{n-1}\left( t\right) ,\text{ \ \ if }m\left( t_{0}\right) \leq t\leq t_{0}, \\ -a\left( t\right) \lambda _{n-1}\left( \tau \left( t\right) \right) \prod\limits_{u=t}^{\tau \left( t\right) -1}\left( 1-\lambda _{n-1}\left( u\right) \right) -\sum\limits_{i=1}^{k}b_{i}\left( t\right) \prod\limits_{u=t}^{\sigma _{i}\left( t\right) -1}\left( 1-\lambda _{n-1}\left( u\right) \right) \\ -\sum\limits_{j=1}^{l}c_{j}\left( t\right) \prod\limits_{u=t}^{\tau _{j}\left( t\right) -1}\left( 1-\lambda _{n-1}\left( u\right) \right) ,\text{ }t\geq t_{0}, \end{array} \right. \end{equation*} for \(n=1,2,...\). Then, by this inequality, we get \begin{equation*} \lambda _{0}\left( t\right) \geq -a\left( t\right) \lambda _{0}\left( \tau \left( t\right) \right) \prod_{u=t}^{\tau \left( t\right) -1}\left( 1-\lambda _{0}\left( u\right) \right) -\sum_{i=1}^{k}b_{i}\left( t\right) \prod_{u=t}^{\sigma _{i}\left( t\right) -1}\left( 1-\lambda _{0}\left( u\right) \right) -\sum_{j=1}^{l}c_{j}\left( t\right) \prod_{u=t}^{\tau _{j}\left( t\right) -1}\left( 1-\lambda _{0}\left( u\right) \right) =\lambda _{1}\left( t\right) . \end{equation*} Then, we obtain \(\lambda _{0}(t)\geq \lambda _{1}(t)\geq ...\geq \lambda _{n}(t)\geq 0\). So, there exists a pointwise limit \(\lambda (t)=\lim\limits_{n\rightarrow \infty }\lambda _{n}(t)\). Thus, from the Lebesgue convergence theorem, we obtain \begin{equation*} \lambda \left( t\right) =-a\left( t\right) \lambda \left( \tau \left( t\right) \right) \prod_{u=t}^{\tau \left( t\right) -1}\left( 1-\lambda \left( u\right) \right) -\sum_{i=1}^{k}b_{i}\left( t\right) \prod_{u=t}^{\sigma _{i}\left( t\right) -1}\left( 1-\lambda \left( u\right) \right) -\sum_{j=1}^{l}c_{j}\left( t\right) \prod_{u=t}^{\tau _{j}\left( t\right) -1}\left( 1-\lambda \left( u\right) \right) . \end{equation*} Hence, \begin{equation*} x\left( t\right) =\left\{ \begin{array}{l} \lambda \left( t\right) ,\text{ \ \ if }m\left( t_{0}\right) \leq t\leq t_{0}, \\ \lambda \left( t_{0}\right) \prod\limits_{u=t_{0}}^{t-1}\left( 1-\lambda \left( u\right) \right) ,\ t\geq t_{0}, \end{array} \right. \end{equation*} is a positive solution of (1).

Theorem 2. Let \(a\), \(b_{i}\) and \(c_{j}\) be nonpositive sequences and let \( \Delta a(t)>0\), \(a\left( t_{0}\right) \neq -\infty \). If \begin{equation*} \sum_{u=t_{0}}^{\infty }\sum_{j=1}^{l}c_{j}\left( u\right) =-\infty , \end{equation*} and \(x\) is an eventually positive solution of (1), then \( x(t)\rightarrow \infty \) as \(t\rightarrow \infty \).

Proof. Assume that \(x(t)>0\) for \(t\geq T_{1}\). Choose \(T\geq T_{1}\) such that \( T_{1}\leq \inf \{\sigma _{i}(s):s\geq T,\ i=1,...,k\}\). Then \(\Delta x(t)+a(t)\Delta x(\tau (t))\geq 0\), for \(t\geq T\), \begin{equation*} \Delta x\left( t\right) +a\left( t\right) \Delta x\left( \tau \left( t\right) \right) =-\sum_{i=1}^{k}b_{i}\left( t\right) x\left( \sigma _{i}\left( t\right) \right) -\sum_{j=1}^{l}c_{j}\left( t\right) x\left( \tau _{j}\left( t\right) \right) , \end{equation*} and \begin{equation*} \Delta \left[ a(t)x\left( \tau \left( t\right) \right) \right] =a\left( t\right) \Delta x\left( \tau \left( t\right) \right) +\Delta a\left( t\right) x\left( \tau \left( t+1\right) \right) , \end{equation*} that is \begin{equation*} \Delta \left[ x\left( t\right) +a(t)x\left( \tau \left( t\right) \right) \right] -\Delta a\left( t\right) x\left( \tau \left( t+1\right) \right) \geq -\sum_{j=1}^{l}c_{j}\left( t\right) x\left( \tau _{j}\left( t\right) \right) . \end{equation*} From this, we can write \begin{equation*} \Delta \left[ x\left( t\right) +a(t)x\left( \tau \left( t\right) \right) \right] \geq -\sum_{j=1}^{l}c_{j}\left( t\right) x\left( \tau _{j}\left( t\right) \right) , \end{equation*} so \begin{equation*} \Delta \left[ x\left( t\right) +a(t)x\left( \tau \left( t\right) \right) \right] \geq -x\left( T\right) \sum_{j=1}^{l}c_{j}\left( t\right) , \end{equation*} which implies \begin{equation*} x\left( t\right) +a(t)x\left( \tau \left( t\right) \right) \geq a(t_{0})x\left( \tau \left( t_{0}\right) \right) -x\left( T\right) \sum_{u=t_{0}}^{t-1}\sum_{j=1}^{l}c_{j}\left( u\right) . \end{equation*} So, we get \begin{equation*} x\left( t\right) \geq a(t_{0})x\left( \tau \left( t_{0}\right) \right) -x\left( T\right) \sum_{u=t_{0}}^{t-1}\sum_{j=1}^{l}c_{j}\left( u\right) . \end{equation*} Then \(x(t)\rightarrow \infty \) as \(t\rightarrow \infty \).

Theorem 3. Let \(a(t)>0\), \(b_{i}\) and \(c_{j}\) be nonnegative sequences and let \(\Delta a(t)< 0\), \(a\left( t_{0}\right) \neq \infty \). If \begin{equation*} \sum_{u=t_{0}}^{\infty }\sum_{j=1}^{l}c_{j}\left( u\right) =\infty , \end{equation*} and \(x\) is an eventually positive solution of (1), then \( x(t)\rightarrow 0\) as \(t\rightarrow \infty \).

Proof. Assume that \(x(t)>0\) for \(t\geq T_{1}\). Choose \(T\geq T_{1}\) such that \( T_{1}\leq \inf \{\sigma _{i}(s):s\geq T,\ i=1,...,k\}\). Then \(\Delta x(t)+a(t)\Delta x(\tau (t))\leq 0\) for \(t\geq T\), and \begin{equation*} \Delta x\left( t\right) +a\left( t\right) \Delta x\left( \tau \left( t\right) \right) \leq -\sum_{j=1}^{l}c_{j}\left( t\right) x\left( \tau _{j}\left( t\right) \right) , \end{equation*} that is \begin{equation*} \Delta \left[ x\left( t\right) +a(t)x\left( \tau \left( t\right) \right) \right] -\Delta a\left( t\right) x\left( \tau \left( t+1\right) \right) \leq -\sum_{j=1}^{l}c_{j}\left( t\right) x\left( \tau _{j}\left( t\right) \right) . \end{equation*} From this, we can write \begin{equation*} \Delta \left[ x\left( t\right) +a(t)x\left( \tau \left( t\right) \right) \right] \leq -\sum_{j=1}^{l}c_{j}\left( t\right) x\left( \tau _{j}\left( t\right) \right) , \end{equation*} so \begin{equation*} \Delta \left[ x\left( t\right) +a(t)x\left( \tau \left( t\right) \right) \right] \leq -x\left( T\right) \sum_{j=1}^{l}c_{j}\left( t\right) , \end{equation*} which implies \begin{equation*} x\left( t\right) +a(t)x\left( \tau \left( t\right) \right) \leq a(t_{0})x\left( \tau \left( t_{0}\right) \right) -x\left( T\right) \sum_{u=t_{0}}^{t-1}\sum_{j=1}^{l}c_{j}\left( u\right) . \end{equation*} So, we get \begin{equation*} x\left( t\right) \leq a(t_{0})x\left( \tau \left( t_{0}\right) \right) -x\left( T\right) \sum_{u=t_{0}}^{t-1}\sum_{j=1}^{l}c_{j}\left( u\right) . \end{equation*} Since \(x(t)>0\), we get a contradiction. Then \(x(t)\rightarrow 0\) as \( t\rightarrow \infty \).

Now, we investigate the asymptotic behavior of solutions of (1) regardless of the sign of the coefficients. In the process of inverting (1), a summation by parts has to be performed on the term involving \(\Delta x(\tau (t))\).

Lemma 1. A sequence \(x\) is a solution of (1)--(2) if and only if

\begin{eqnarray} x\left( t\right) & =&\left( x\left( t_{0}\right) +a\left( t_{0}-1\right) x\left( \tau \left( t_{0}\right) \right) \right) \prod_{s=t_{0}}^{t-1}\left( 1-B(s)\right) -a(t-1)x(\tau \left( t\right) ) \notag \\ & &+\sum_{r=t_{0}}^{t-1}\prod_{s=r+1}^{t-1}\left( 1-B(s)\right) h\left( r\right) x\left( \tau \left( r\right) \right) -\sum_{r=t_{0}}^{t-1}\prod_{s=r+1}^{t-1}\left( 1-B(s)\right) B\left( r\right) x\left( r+1\right) \notag \\ && -\sum_{r=t_{0}}^{t-1}\prod_{s=r+1}^{t-1}\left( 1-B(s)\right) \sum_{i=1}^{k}b_{i}\left( r\right) x\left( \sigma _{i}\left( r\right) \right) -\sum_{r=t_{0}}^{t-1}\prod_{s=r+1}^{t-1}\left( 1-B(s)\right) \sum_{j=1}^{l}c_{j}\left( r\right) x\left( \tau _{j}\left( r\right) \right) , \label{5} \end{eqnarray}
(4)
for \(t\geq t_{0}\), where \begin{equation*} B\left( t\right) =\sum_{i=1}^{k}b_{i}\left( t\right) +\sum_{j=1}^{l}c_{j}\left( t\right) ,\ 0 < B\left( t\right) < 1, \end{equation*} and
\begin{equation} h\left( t\right) =a\left( t\right) -a\left( t-1\right) \left( 1-B(t)\right) . \label{h1} \end{equation}
(5)

Proof. Since \begin{eqnarray*} x\left( \tau _{j}\left( t\right) \right) =x\left( t+1\right) +\sum_{u=t+1}^{\tau _{j}\left( t\right) -1}\Delta x\left( u\right) , \text{ and } x\left( \sigma _{i}\left( t\right) \right) =x\left( t+1\right) +\sum_{u=t+1}^{\sigma _{i}(t)-1}\Delta x\left( u\right), \end{eqnarray*} we can rewrite (1) as

\begin{align} \Delta x\left( t\right) & =-a\left( t\right) \Delta x\left( \tau \left( t\right) \right) -\sum_{i=1}^{k}b_{i}\left( t\right) \sum_{u=t+1}^{\sigma _{i}(t)-1}\Delta x\left( u\right) -\sum_{j=1}^{l}c_{j}\left( t\right) \sum_{u=t+1}^{\tau _{j}(t)-1}\Delta x\left( u\right) -B\left( t\right) x\left( t+1\right) . \label{6} \end{align}
(6)
Multiplying both sides of (6) with \(\prod\limits_{s=t_{0}}^{t}\left( 1-B(s\right) )^{-1}\), by summing from \(t_{0}\) to \(t-1\), we obtain \begin{align*} & \underset{r=t_{0}}{\overset{t-1}{\sum }}\Delta \left[ \prod \limits_{s=t_{0}}^{r-1}\left( 1-B(s\right) )^{-1}x\left( r\right) \right] =-\sum_{r=t_{0}}^{t-1}\prod_{s=t_{0}}^{r}\left( 1-B(s\right) )^{-1}a\left( r\right) \Delta x\left( \tau \left( r\right) \right) -\sum_{r=t_{0}}^{t-1}\prod_{s=t_{0}}^{r}\left( 1-B(s\right) )^{-1}B\left( r\right) x\left( r+1\right) \\ &\,\,\, -\sum_{r=t_{0}}^{t-1}\prod_{s=t_{0}}^{r}\left( 1-B(s\right) )^{-1}\sum_{i=1}^{k}b_{i}\left( t\right) \sum_{u=t+1}^{\sigma _{i}(r)-1}\Delta x\left( u\right) -\sum_{r=t_{0}}^{t-1}\prod_{s=t_{0}}^{r}\left( 1-B(s\right) )^{-1}\sum_{j=1}^{l}c_{j}\left( t\right) \sum_{u=t+1}^{\tau _{j}(r)}\Delta x\left( u\right) . \end{align*} By dividing both sides of the above expression by \(\prod \limits_{s=t_{0}}^{t-1}\left( 1-B(s\right) )^{-1}\) we get
\begin{eqnarray} x\left( t\right) & =&x\left( t_{0}\right) \prod_{s=t_{0}}^{t-1}\left( 1-B(s)\right) -\sum_{r=t_{0}}^{t-1}\prod_{s=r+1}^{t-1}\left( 1-B(s\right) )a\left( r\right) \Delta x\left( \tau \left( r\right) \right) -\sum_{r=t_{0}}^{t-1}\prod_{s=r+1}^{t-1}\left( 1-B(s)\right) B\left( r\right) x\left( r+1\right) \notag \\ && -\sum_{r=t_{0}}^{t-1}\prod_{s=r+1}^{t-1}\left( 1-B(s\right) )\sum_{i=1}^{k}b_{i}\left( r\right) \sum_{u=t+1}^{\sigma _{i}(r)-1}\Delta x\left( u\right) -\sum_{r=t_{0}}^{t-1}\prod_{s=r+1}^{t-1}\left( 1-B(s\right) )\sum_{j=1}^{l}c_{j}\left( r\right) \sum_{u=t+1}^{\tau _{j}(r)-1}\Delta x\left( u\right) . \label{7} \end{eqnarray}
(7)
By performing a summation by parts, we get
\begin{align} & \sum_{r=t_{0}}^{t-1}\prod_{s=r+1}^{t-1}\left( 1-B(s\right) )a\left( r\right) \Delta x\left( \tau \left( r\right) \right) \notag \\ & =a\left( t-1\right) x\left( \tau \left( t\right) \right) -a\left( t_{0}-1\right) x\left( \tau \left( t_{0}\right) \right) \prod_{s=t_{0}}^{t-1}\left( 1-B(s\right) )-\sum_{r=t_{0}}^{t-1}\Delta \left[ \prod_{s=r}^{t-1}\left( 1-B(s\right) )a\left( r-1\right) \right] x\left( \tau \left( r\right) \right) . \label{8} \end{align}
(8)
But, \begin{align*} \sum_{r=t_{0}}^{t-1}\Delta \left[ \prod_{s=r}^{t-1}\left( 1-B(s\right) )a\left( r-1\right) \right] x\left( \tau \left( r\right) \right) =\sum_{r=t_{0}}^{t-1}\prod_{s=r+1}^{t-1}\left( 1-B(s\right) )\left[ a(r)-a\left( r-1\right) \left( 1-B(r)\right) \right] x\left( \tau \left( r\right) \right) \\ & =\sum_{r=t_{0}}^{t-1}\prod_{s=r+1}^{t-1}\left( 1-B(s\right) )h\left( r\right) x\left( \tau \left( r\right) \right) , \end{align*} where \(h\) is given by (5). We obtain (4) by substituting (8) into (7). Since each step is reversible, the converse follows easily. This completes the proof.
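The only nonstandard manipulation above is the discrete summation by parts leading to (8). The following Python sketch is merely a numerical sanity check of the standard identity behind that step, \(\sum_{r=m}^{n-1} f(r)\Delta g(r)=f(n)g(n)-f(m)g(m)-\sum_{r=m}^{n-1} g(r+1)\Delta f(r)\), on randomly generated sequences; the sequence names and lengths are arbitrary choices of this illustration, not part of the proof.

```python
import numpy as np

rng = np.random.default_rng(0)
m, n = 3, 12                       # arbitrary integer endpoints, m < n
f = rng.normal(size=n - m + 1)     # values f(m), ..., f(n)
g = rng.normal(size=n - m + 1)     # values g(m), ..., g(n)

def delta(u):
    """Forward difference: (Delta u)(r) = u(r+1) - u(r)."""
    return u[1:] - u[:-1]

# sum_{r=m}^{n-1} f(r) * Delta g(r)
lhs = np.sum(f[:-1] * delta(g))
# f(n)g(n) - f(m)g(m) - sum_{r=m}^{n-1} g(r+1) * Delta f(r)
rhs = f[-1] * g[-1] - f[0] * g[0] - np.sum(g[1:] * delta(f))

assert np.isclose(lhs, rhs)
print(lhs, rhs)
```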

Theorem 4. Assume that \(0< B\left( t\right) < 1\) and the following conditions hold

\begin{equation} \prod\limits_{s=t_{0}}^{t-1}\left( 1-B(s)\right) \rightarrow 0\text{ as } t\rightarrow \infty , \label{9} \end{equation}
(9)
and
\begin{align} & \left\vert a(t-1)\right\vert +\sum_{r=t_{0}}^{t-1}\prod_{s=r+1}^{t-1}\left( 1-B(s)\right) \left\vert h\left( r\right) \right\vert +\sum_{r=t_{0}}^{t-1}\prod_{s=r+1}^{t-1}\left( 1-B(s)\right) \left\vert B\left( r\right) \right\vert \notag \\ & +\sum_{r=t_{0}}^{t-1}\prod_{s=r+1}^{t-1}\left( 1-B(s)\right) \sum_{i=1}^{k}\left\vert b_{i}\left( r\right) \right\vert +\sum_{r=t_{0}}^{t-1}\prod_{s=r+1}^{t-1}\left( 1-B(s)\right) \sum_{j=1}^{l}\left\vert c_{j}\left( r\right) \right\vert\leq\beta< 1. \label{10} \end{align}
(10)
Then for each initial condition (2), every solution of (1) converges to zero.

Proof. Let \(C([m\left( t_{0}\right) ,\infty )\cap \mathbb{Z})\) be the space of all bounded sequences and let \( M=\{x\in C([m\left( t_{0}\right) ,\infty )\cap \mathbb{Z}):x(t)\rightarrow 0 \text{ as }t\rightarrow \infty \} \) be a closed subspace of it. Then \((M,\left\Vert \cdot\right\Vert )\) is a Banach space with the norm \( \left\Vert x\right\Vert =\sup_{t\geq m\left( t_{0}\right) }\left\vert x\left( t\right) \right\vert . \) Define the operator \(\phi :M\rightarrow M\) by

\begin{equation} \left( \phi x\right) \left( t\right) =\left\{ \begin{array}{l} \psi \left( t\right) ,\text{ \ \ if }m\left( t_{0}\right) \leq t\leq t_{0,} \\ \left( x\left( t_{0}\right) +a\left( t_{0}-1\right) x\left( \tau \left( t_{0}\right) \right) \right) \prod\limits_{s=t_{0}}^{t-1}\left( 1-B(s)\right) -a(t-1)x(\tau \left( t\right) ) \\ +\sum\limits_{r=t_{0}}^{t-1}\prod\limits_{s=r+1}^{t-1}\left( 1-B(s)\right) h\left( r\right) x\left( \tau \left( r\right) \right) -\sum\limits_{r=t_{0}}^{t-1}\prod\limits_{s=r+1}^{t-1}\left( 1-B(s)\right) B\left( r\right) x\left( r+1\right) \\ -\sum\limits_{r=t_{0}}^{t-1}\prod\limits_{s=r+1}^{t-1}\left( 1-B(s)\right) \sum\limits_{i=1}^{k}b_{i}\left( r\right) x\left( \sigma _{i}\left( r\right) \right)-\sum\limits_{r=t_{0}}^{t-1}\prod\limits_{s=r+1}^{t-1}\left( 1-B(s)\right) \sum\limits_{j=1}^{l}c_{j}\left( r\right) x\left( \tau _{j}\left( r\right) \right) ,\text{\ }t\geq t_{0}. \end{array} \right. \label{11} \end{equation}
(11)
It is clear that for \(x\in M\), \(\phi x\) is bounded. Now, we will show that \( \phi \) is a contraction. Let \(x\) and \(y\) be two bounded sequences on \( [m\left( t_{0}\right) ,\infty )\cap \mathbb{Z}\) satisfying the same initial condition (2). Then for \(t\geq t_{0}\), we get \begin{align*} & \left\vert \left( \phi x\right) \left( t\right) -\left( \phi y\right) \left( t\right) \right\vert \leq \left\vert a(t-1)\right\vert \left\vert x\left( \tau \left( t\right) \right) -y\left( \tau \left( t\right) \right) \right\vert +\sum_{r=t_{0}}^{t-1}\prod_{s=r+1}^{t-1}\left( 1-B(s)\right) \left\vert h\left( r\right) \right\vert \left\vert x\left( \tau \left( r\right) \right) -y\left( \tau \left( r\right) \right) \right\vert \\ & +\sum_{r=t_{0}}^{t-1}\prod_{s=r+1}^{t-1}\left( 1-B(s)\right) \left\vert x\left( r+1\right) -y\left( r+1\right) \right\vert \left\vert B\left( r\right) \right\vert +\sum_{r=t_{0}}^{t-1}\prod_{s=r+1}^{t-1}\left( 1-B(s)\right) \sum_{i=1}^{k}\left\vert b_{i}\left( r\right) \right\vert \left\vert x\left( \sigma _{i}\left( r\right) \right) -y\left( \sigma _{i}\left( r\right) \right) \right\vert \\ & +\sum_{r=t_{0}}^{t-1}\prod_{s=r+1}^{t-1}\left( 1-B(s)\right) \sum_{j=1}^{l}\left\vert c_{j}\left( r\right) \right\vert \left\vert x\left( \tau _{j}\left( r\right) \right) -y\left( \tau _{j}\left( r\right) \right) \right\vert \leq \beta \left\Vert x-y\right\Vert . \end{align*} Thus \(\phi\) is a contraction and, by the contraction mapping theorem, it has a unique fixed point in \(M\), which solves (1). Now, we will show that \(\left( \phi x\right) \left( t\right) \rightarrow 0\) as \(t\rightarrow \infty \). Indeed, for \(x\in M\), we have
\begin{eqnarray} \left\vert \left( \phi x\right) \left( t\right) \right\vert &\leq& \left\vert \left( x\left( t_{0}\right) +a\left( t_{0}-1\right) x\left( \tau \left( t_{0}\right) \right) \right) \right\vert \prod_{s=t_{0}}^{t-1}\left( 1-B(s)\right) +\left\vert a(t-1)\right\vert \left\vert x\left( \tau \left( t\right) \right) \right\vert \notag \\ && +\sum_{r=t_{0}}^{t-1}\prod_{s=r+1}^{t-1}\left( 1-B(s)\right) \left\vert h\left( r\right) \right\vert \left\vert x\left( \tau \left( r\right) \right) \right\vert +\sum_{r=t_{0}}^{t-1}\prod_{s=r+1}^{t-1}\left( 1-B(s)\right) \left\vert x\left( r+1\right) \right\vert \left\vert B\left( r\right) \right\vert \notag \\ && +\sum_{r=t_{0}}^{t-1}\prod_{s=r+1}^{t-1}\left( 1-B(s)\right) \sum_{i=1}^{k}\left\vert b_{i}\left( r\right) \right\vert \left\vert x\left( \sigma _{i}\left( r\right) \right) \right\vert +\sum_{r=t_{0}}^{t-1}\prod_{s=r+1}^{t-1}\left( 1-B(s)\right) \sum_{j=1}^{l}\left\vert c_{j}\left( r\right) \right\vert \left\vert x\left( \tau _{j}\left( r\right) \right) \right\vert .\notag\\&& \label{12} \end{eqnarray}
(12)
Note that by (9), \begin{equation*} \left\vert \left( x\left( t_{0}\right) +a\left( t_{0}-1\right) x\left( \tau \left( t_{0}\right) \right) \right) \right\vert \prod_{s=t_{0}}^{t-1}\left( 1-B(s)\right) \rightarrow 0\text{ as }t\rightarrow \infty . \end{equation*} Moreover, since \(x(t)\rightarrow 0\) as \(t\rightarrow \infty \), for each \( \varepsilon >0\), there exists \(T_{1}>t_{0}\) such that \(u\geq T_{1}\) implies that \(\left\vert x(\tau (u))\right\vert < \frac{\varepsilon }{2}\). Thus, for \( t\geq T_{1}\), the third term \(I_{3}\) in (12) satisfies \begin{align*} I_{3}& =\sum_{r=t_{0}}^{t-1}\prod_{s=r+1}^{t-1}\left( 1-B(s)\right) \left\vert h\left( r\right) \right\vert \left\vert x\left( \tau \left( r\right) \right) \right\vert \\&\leq \sum_{r=t_{0}}^{T_{1}-1}\prod_{s=r+1}^{t-1}\left( 1-B(s)\right) \left\vert h\left( r\right) \right\vert \left\vert x\left( \tau \left( r\right) \right) \right\vert +\sum_{r=T_{1}}^{t-1}\prod_{s=r+1}^{t-1}\left( 1-B(s)\right) \left\vert h\left( r\right) \right\vert \left\vert x\left( \tau \left( r\right) \right) \right\vert\\& \leq \sum_{r=t_{0}}^{T_{1}-1}\prod_{s=r+1}^{t-1}\left( 1-B(s)\right) \left\vert h\left( r\right) \right\vert \left\vert x\left( \tau \left( r\right) \right) \right\vert +\frac{\varepsilon }{2}\sum_{r=T_{1}}^{t-1} \prod_{s=r+1}^{t-1}\left( 1-B(s)\right) \left\vert h\left( r\right) \right\vert \\& \leq \frac{\varepsilon }{2}+\beta \frac{\varepsilon }{2}\\&\leq \varepsilon \end{align*} for all sufficiently large \(t\); here the first sum is smaller than \(\frac{\varepsilon }{2}\) for large \(t\) because it contains finitely many terms and each product \(\prod_{s=r+1}^{t-1}\left( 1-B(s)\right)\) tends to zero by (9), while the second sum is at most \(\beta\frac{\varepsilon }{2}\) by (10). Thus \(I_{3}\rightarrow 0\) as \(t\rightarrow \infty \). By a similar technique, we can prove that the remaining terms in (12) tend to zero as \(t\rightarrow \infty \). Therefore \((\phi x)(t)\rightarrow 0\) as \(t\rightarrow \infty \). This completes the proof.

Theorem 5. Suppose that \(0< B\left( t\right) < 1\). If all solutions of (1) converge to zero, then (9) holds.

Proof. Suppose that (9) does not hold; that is,

\begin{equation} \lim_{t\rightarrow \infty }\prod\limits_{s=t_{0}}^{t-1}\left( 1-B(s)\right) =\delta \neq 0. \label{13} \end{equation}
(13)
By (13), there exists a sequence \(\left\{ t_{n}\right\} \) tending to \(\infty \) such that \begin{equation*} \prod\limits_{s=t_{0}}^{t_{n}-1}\left( 1-B(s)\right) \rightarrow \delta \text{ as }n\rightarrow \infty . \end{equation*} Let \(x\) be a solution with \(x(t_{0})\neq 0\). Then,
\begin{align} & \lim_{n\rightarrow \infty }\left\vert \left( x\left( t_{0}\right) +a\left( t_{0}-1\right) x\left( \tau \left( t_{0}\right) \right) \right) \right\vert \prod_{s=t_{0}}^{t_{n}-1}\left( 1-B(s)\right)=\left\vert \left( x\left( t_{0}\right) +a\left( t_{0}-1\right) x\left( \tau \left( t_{0}\right) \right) \right) \right\vert \delta \neq 0. \label{14} \end{align}
(14)
From Lemma 1, \(x(t_{n})\) satisfies (4). On the other hand, we know that
\begin{eqnarray} &&\lim_{n\rightarrow \infty }\left[ \sum_{r=t_{0}}^{t_{n}-1} \prod_{s=r+1}^{t_{n}-1}\left( 1-B(s)\right) \left\vert h\left( r\right) \right\vert \left\vert x\left( \tau \left( r\right) \right) \right\vert -\left\vert a(t_{n}-1)\right\vert \left\vert x\left( \tau \left( t_{n}\right) \right) \right\vert \right. \notag \\ &&+\sum_{r=t_{0}}^{t_{n}-1}\prod\limits_{s=r+1}^{t_{n}-1}\left( 1-B(s)\right) \left\vert x\left( r+1\right) \right\vert \left\vert B\left( r\right) \right\vert +\sum_{r=t_{0}}^{t_{n}-1}\prod\limits_{s=r+1}^{t_{n}-1}\left( 1-B(s)\right) \sum_{i=1}^{k}\left\vert b_{i}\left( r\right) \right\vert \left\vert x\left( \sigma _{i}\left( r\right) \right) \right\vert \notag \\ &&\left. +\sum_{r=t_{0}}^{t_{n}-1}\prod_{s=r+1}^{t_{n}-1}\left( 1-B(s)\right) \sum_{j=1}^{l}\left\vert c_{j}\left( r\right) \right\vert \left\vert x\left( \tau _{j}\left( r\right) \right) \right\vert \right] \begin{array}{c} = \end{array} 0. \label{15} \end{eqnarray}
(15)
Since all solutions tend to zero, from (4), (14) and (15), we get \begin{equation*} \lim_{n\rightarrow \infty }x\left( t_{n}\right) =\left\vert \left( x\left( t_{0}\right) +a\left( t_{0}-1\right) x\left( \tau \left( t_{0}\right) \right) \right) \right\vert \delta \neq 0, \end{equation*} which contradicts the assumption that all solutions of (1) converge to zero. The proof is complete.

We end the paper with the following example.

Example 1. Consider the mixed type neutral difference equation

\begin{equation} \Delta x\left( t\right) +a\left( t\right) \Delta x\left( \tau \left( t\right) \right) +b_{1}\left( t\right) x\left( \sigma _{1}\left( t\right) \right) +c_{1}\left( t\right) x\left( \tau _{1}\left( t\right) \right) =0, \label{4.1} \end{equation}
(16)
with an assumed initial condition \begin{equation*} x(t)=\psi (t)\text{ for }t\in \lbrack m\left( t_{0}\right) ,t_{0}]\cap \mathbb{Z}, \end{equation*} where \(t_{0}=0\), \(m\left( t_{0}\right) =-2\), \(\psi (t)=t/3\), \(a\left( t\right) =\dfrac{1}{3^{t+2}}\), \(b_{1}\left( t\right) =1-\dfrac{1}{2^{t}}\), \( c_{1}\left( t\right) =\dfrac{1}{2^{t+1}}\), \(\tau \left( t\right) =3t/2\), \( \sigma _{1}\left( t\right) =t/2-2\), \(\tau _{1}\left( t\right) =5t/2\). We have \begin{equation*} B\left( t\right) =1-\dfrac{1}{2^{t+1}},\ \prod\limits_{s=0}^{t-1}\left( 1-B(s)\right) =\prod\limits_{s=0}^{t-1}\dfrac{1}{2^{s+1}}\rightarrow 0\text{ as }t\rightarrow \infty , \end{equation*} and \begin{align*} & \left\vert a(t-1)\right\vert +\sum_{r=t_{0}}^{t-1}\prod_{s=r+1}^{t-1}\left( 1-B(s)\right) \left\vert h\left( r\right) \right\vert +\sum_{r=t_{0}}^{t-1}\prod_{s=r+1}^{t-1}\left( 1-B(s)\right) \left\vert B\left( r\right) \right\vert \\ & +\sum_{r=t_{0}}^{t-1}\prod_{s=r+1}^{t-1}\left( 1-B(s)\right) \left\vert b_{1}\left( r\right) \right\vert +\sum_{r=0}^{t-1}\prod_{s=r+1}^{t-1}\left( 1-B(s)\right) \left\vert c_{1}\left( r\right) \right\vert \\ & =\dfrac{1}{3^{t+1}}+\sum_{r=0}^{t-1}\prod_{s=r+1}^{t-1}\dfrac{1}{2^{s+1}} \left\vert \frac{1}{3^{r+2}}-\frac{1}{3^{r+1}\times 2^{r+1}}\right\vert +\sum_{r=0}^{t-1}\prod_{s=r+1}^{t-1}\dfrac{1}{2^{s+1}}\left( 1-\dfrac{1}{ 2^{r+1}}\right) \\ & +\sum_{r=0}^{t-1}\prod_{s=r+1}^{t-1}\dfrac{1}{2^{s+1}}\left( 1-\dfrac{1}{ 2^{r}}\right) +\sum_{r=0}^{t-1}\prod_{s=r+1}^{t-1}\dfrac{1}{2^{s+1}}\times \dfrac{1}{2^{r+1}} \simeq 0.722< 1. \end{align*} Thus all the conditions of Theorem 4 are satisfied and every solution of (16) converges to zero.
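The algebraic simplifications used in this example can be confirmed by exact rational arithmetic. The Python sketch below is illustrative only (the range of \(t\) is an arbitrary choice); it checks that \(B(t)=b_{1}(t)+c_{1}(t)\) and \(h(t)=a(t)-a(t-1)(1-B(t))\), computed from the data of the example, agree with the closed forms \(1-\frac{1}{2^{t+1}}\) and \(\frac{1}{3^{t+2}}-\frac{1}{3^{t+1}2^{t+1}}\) displayed above.

```python
from fractions import Fraction as F

# data of Example 1
a  = lambda t: F(1, 3**(t + 2))
b1 = lambda t: 1 - F(1, 2**t)
c1 = lambda t: F(1, 2**(t + 1))

B = lambda t: b1(t) + c1(t)                    # B(t) = b_1(t) + c_1(t)
h = lambda t: a(t) - a(t - 1) * (1 - B(t))     # h(t) as in (5)

for t in range(0, 8):
    assert B(t) == 1 - F(1, 2**(t + 1))
    assert h(t) == F(1, 3**(t + 2)) - F(1, 3**(t + 1) * 2**(t + 1))
print("closed forms for B(t) and h(t) confirmed for t = 0,...,7")
```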

3. Concluding remarks

In this article, a neutral mixed type difference equation is considered. The asymptotic behavior of its solutions is described, and a necessary and sufficient condition is obtained by using the contraction mapping theorem. The results are supported by a suitable illustrative example.

Author Contributions

All authors contributed equally to the writing of this paper. All authors read and approved the final manuscript.

Competing Interests

The author(s) do not have any competing interests in the manuscript.

References

  1. Burton, T. A. (2013). Stability by fixed point theory for functional differential equations. Courier Corporation, New York. [Google Scholor]
  2. Burton, T. A., & Furumochi, T. (2001). Fixed points and problems in stability theory for ordinary and functional differential equations. Dynamic Systems and Applications, 10(1), 89-116. [Google Scholor]
  3. Ardjouni, A., & Djoudi, A. (2012). Fixed points and stability in nonlinear neutral differential equations with variable delays. Nonlinear Studies, 19(3), 345--357. [Google Scholor]
  4. Ardjouni, A., & Djoudi, A. (2011). Stability in nonlinear neutral differential equations with variable delays using fixed point theory. Electronic Journal of Qualitative Theory of Differential Equations, 2011(43), 1-11. [Google Scholor]
  5. Ardjouni, A., Derrardjia, I., & Djoudi, A. (2014). Stability in totally nonlinear neutral differential equations with variable delay. Acta Mathematica Universitatis Comenianae, 83(1), 119-134. [Google Scholor]
  6. Ardjouni, A., Djoudi, A., & Soualhia, I. (2012). Stability for linear neutral integro-differential equations with variable delays. Electronic journal of Differential Equations, 172(2012), 1-14. [Google Scholor]
  7. Becker, L. C., & Burton, T. A. (2006). Stability, fixed points and inverses of delays. Proceedings of the Royal Society of Edinburgh Section A: Mathematics, 136(2), 245-275. [Google Scholor]
  8. Derrardjia, I., Ardjouni, A., & Djoudi, A. (2013). Stability by Krasnoselskii's theorem in totally nonlinear neutral differential equations. Opuscula Mathematica, 33(2), 255-272. [Google Scholor]
  9. Dung, N. T. (2015). Asymptotic behavior of linear advanced differential equations. Acta Mathematica Scientia, 35(3), 610-618. [Google Scholor]
  10. Dung, N. T. (2013). New stability conditions for mixed linear Levin-Nohel integro-differential equations. Journal of Mathematical Physics, 54(8), 082705. [Google Scholor]
  11. Elaydi, S. (1999). An Introduction to Difference Equations, Springer, New York.
  12. Elaydi, S. (1994). Periodicity and stability of linear Volterra difference systems. Journal of Mathematical Analysis and Applications, 181(2), 483-492. [Google Scholor]
  13. Elaydi, S., & Murakami, S. (1997). Uniform asymptoic stability in linear volterra difference equations. Journal of Difference Equations and Applications, 3(3-4), 203-218. [Google Scholor]
  14. Eloe, P. W., Islam, M. N., & Raffoul, Y. N. (2003). Uniform asymptotic stability in nonlinear Volterra discrete systems. Computers \& Mathematics with Applications, 45(6-9), 1033-1039. [Google Scholor]
  15. Islam, M. N., & Raffoul, Y. N. (2003). Exponential Stability in Non-linear Difference Equations. Journal of Difference Equations and Applications, 9(9), 819-825. [Google Scholor]
  16. Raffoul, Y. N. (2003). General theorems for stability and boundedness for nonlinear functional discrete systems. Journal of mathematical analysis and applications, 279(2), 639-650. [Google Scholor]
  17. Ardjouni, A., & Djoudi, A. (2015). Stability in nonlinear neutral difference equations. Afrika Matematika, 26(3-4), 559-574. [Google Scholor]
  18. Ardjouni, A., & Djoudi, A. (2015). Asymptotic stability in totally nonlinear neutral difference equations. Proyecciones (Antofagasta), 34(3), 255-276. [Google Scholor]
  19. Ardjouni, A., & Djoudi, A. (2014). Stability in nonlinear neutral integro-differential equations with variable delay using fixed point theory. Journal of Applied Mathematics and Computing, 44(1-2), 317-336. [Google Scholor]
  20. Ardjouni, A., & Djoudi, A. (2013). Stability in linear neutral difference equations with variable delays. Mathematica Bohemica, 138(3), 245-258. [Google Scholor]
  21. Ardjouni, A., & Djoudi, A. (2013). Stability in nonlinear neutral difference equations with variable delays. TJMM, 5(1), 01-10. [Google Scholor]
  22. Ardjouni, A., & Djoudi, A. (2012). Fixed points and stability in neutral nonlinear differential equations with variable delays. Opuscula Mathematica, 32(1), 5-19. [Google Scholor]
  23. Islam, M., & Yankson, E. (2005). Boundedness and stability in nonlinear delay difference equations employing fixed point theory. Electronic Journal of Qualitative Theory of Differential Equations, 2005(26), 1-18. [Google Scholor]
  24. Kelley, W. G., & Peterson, A. C. (2001). Difference equations: an introduction with applications. Academic press. [Google Scholor]
  25. Bicer, E. (2018). On the Asymptotic Behavior of Solutions of Neutral Mixed Type Differential Equations. Results in Mathematics, 73(4), 1-12. [Google Scholor]
  26. Mesmouli, M. B., Ardjouni, A., & Djoudi, A. (2014). Study of the stability in nonlinear neutral differential equations with functional delay using Krasnoselskii–Burton’s fixed-point. Applied Mathematics and Computation, 243, 492-502. [Google Scholor]
  27. Mesmouli, M. B., Ardjouni, A., & Djoudi, A. (2014). Stability in neutral nonlinear differential equations with functional delay using Krasnoselskii-Burton's fixed-point. Nonlinear Studies, 21(4), 601--617. [Google Scholor]
  28. Raffoul, Y. N. (2006). Stability and periodicity in discrete delay equations. Journal of mathematical analysis and applications, 324(2), 1356-1362. [Google Scholor]
  29. Raffoul, Y. N. (2004). Periodicity in general delay non-linear difference equations using fixed point theory. Journal of Difference Equations and Applications, 10(13-15), 1229-1242. [Google Scholor]
  30. Yankson, E. (2009). Stability in discrete equations with variable delays. Electronic Journal of Qualitative Theory of Differential Equations, 2009(8), 1-7. [Google Scholor]
  31. Yankson, E. (2006). Stability of Volterra difference delay equations. Electronic Journal of Qualitative Theory of Differential Equations, 2006(20), 1-14. [Google Scholor]
  32. Zhang, B. (2005). Fixed points and stability in differential equations with variable delays. Nonlinear Analysis: Theory, Methods & Applications, 63(5-7), e233-e242. [Google Scholor]
  33. Smart, D. R. (1974). Fixed point theorems, Cambridge Tracts in Mathematics. Cambridge University Press, London-New York. [Google Scholor]
On a hyper-singular equation https://old.pisrt.org/psr-press/journals/oma-vol-4-issue-1-2020/on-a-hyper-singular-equation/ Wed, 18 Mar 2020 08:06:13 +0000 https://old.pisrt.org/?p=3841
OMA-Vol. 4 (2020), Issue 1, pp. 8 - 10 Open Access Full-Text PDF

Open Journal of Mathematical Analysis

On a hyper-singular equation

Alexander G. Ramm
Department of Mathematics, Kansas State University, Manhattan, KS 66506, USA.; ramm@math.ksu.edu

Abstract

The equation \(v=v_0+\int_0^t(t-s)^{\lambda -1}v(s)ds\) is considered, where \(\lambda\neq 0,-1,-2,\ldots\) and \(v_0\) is a smooth function rapidly decaying with all its derivatives. It is proved that the solution to this equation exists, is unique, and is smoother than the singular function \(t^{-\frac 5 4}\).

Keywords:

Hyper-singular equation.

1. Introduction and formulation of the result

Let
\begin{equation}\label{e1} v(t)=v_0(t)+\int_0^t (t-s)^{\lambda -1}v(s)ds, \quad \lambda\neq 0,-1,-2,.... \end{equation}
(1)
where \(v_0\) is a smooth function rapidly decaying with all its derivatives as \(t\to \infty\), and \(v_0(t)=0\) if \(t< 0\). The integral in (1) diverges in the classical sense. Our result can be formulated as follows.

Theorem 1. The solution to equation (1) for \(\lambda=-\frac1 4\) exists, is unique, and is less singular than \(t^{-\frac 5 4}\) as \(t\to 0\).

Proof. We define the integral in (1) as a convolution of the distribution \(t^{\lambda-1}\) and \(v\). The space of the test functions for this distribution is the space \(\mathcal{K}:=C^\infty_0(R_+)\) of infinitely differentiable functions \(\phi(t)\) with compact support in \(R_+:=[0,\infty)\). The topology on this space is defined by countably many norms \(\sup_{t\ge 0}t^m|D^p\phi(t)|\). A sequence \(\phi_n(t)\) converges to \(\phi(t)\) in \(\mathcal{K}\) if and only if all the functions \(\phi_n(t)\) have compact support on an interval \([a,b]\), \(a>0\), \(b< \infty\), and \(\phi_n\) converges to \(\phi\) on this interval in each of the above norms. Let us check that \(t^{\lambda-1}:=t_+^{\lambda-1}\) is a distribution on \(\mathcal{K}\) for \(\lambda< 0\), i.e., a bounded linear functional on \(\mathcal{K}\). Let \(\phi_n\in \mathcal{K}\) and \(\phi_n \to \phi\) in \(\mathcal{K}\). If \(\lambda< 0\) then \(\max_{t\in [a,b]}t^{\lambda -1}\le a^{\lambda-1}+b^{\lambda-1}\). Thus, $$|\int_0^\infty t^{\lambda-1}\phi_n(t)dt|\le [a^{\lambda-1}+b^{\lambda-1}] \int_0^\infty |\phi_n(t)|dt,$$ where \(a>0\) and \(b< \infty\). Since \(\phi_n \to \phi\) in \(\mathcal{K}\), we have $$\int_0^\infty |\phi_n(t)|dt\to \int_a^b | \phi|dt.$$ So, the integral \(\int_0^\infty t^{\lambda-1}\phi(t)dt\) defines a bounded linear functional on \(\mathcal{K}\), and \(t^{\lambda-1}\) is a distribution on the set of test functions \(\mathcal{K}\) for \(\lambda\neq 0,-1,-2,...\). The integral in (1) is the convolution \(t^{\lambda-1}\star v\). This convolution is defined for arbitrary distributions in \(\mathcal{K}'\), the dual space of \(\mathcal{K}\); this is done in [3], p.57. For another space of test functions, \(K=C^\infty_0(R)\), this is done in [2], p.135. It is known, see e.g. [1], p.39, that

\begin{equation}\label{e2a} L(f\star h)=L(f)L(h), \end{equation}
(2)
where \(L\) is the Laplace transform, and \(f,h\) are distributions on \(\mathcal{K}\). Let us calculate \(L(t^{\lambda-1})\) using the new variable \(s=pt\):
\begin{equation}\label{e2} L(t^{\lambda-1})=\int_0^\infty t^{\lambda-1}e^{-pt}dt=\int_0^\infty s^{\lambda-1}e^{-s}ds p^{-\lambda}=\Gamma(\lambda)p^{-\lambda},\quad \lambda\neq 0,-1,-2... \end{equation}
(3)
This formula is valid classically for \(\operatorname{Re}\lambda>0\). By analytic continuation with respect to \(\lambda\) it is valid for all complex \(\lambda\neq 0,-1,-2,\ldots\). Applying the Laplace transform to (1) and using formulas (2) and (3), one gets
\begin{equation}\label{e3} L(v)=L(v_0)+\Gamma(\lambda)p^{-\lambda}L(v). \end{equation}
(4)
Let us assume that \(\lambda=-\frac 1 4\) so that \(\lambda-1=-\frac 5 4\). This value appears in the solution to the Navier-Stokes problem in \(R^3\), see [3], p.53. If \(\lambda=-\frac 1 4\), then equation (4) yields
\begin{equation}\label{e4} L(v)=\frac{L(v_0)}{1+4\Gamma(3/4)p^{1/4}}, \end{equation}
(5)
where we have used the relation \(\Gamma(-\frac 1 4)=-4\Gamma(3/4)\), which follows from the known formula \(\Gamma(z+1)=z\Gamma(z)\) with \(z=-\frac 1 4\). Thus,
\begin{equation}\label{e5} v=L^{-1}\frac{L(v_0)}{1+4\Gamma(3/4)p^{1/4}}. \end{equation}
(6)
So, the solution \(v\) does exist and is unique. Moreover, \(v\) is not a distribution if \(v_0\) is smooth and rapidly decaying when \(t\to \infty\). This follows from the known results relating the asymptotics of \(L(f)(p)\) and \(f(t)\) for \(p\to \infty\) and \(t\to 0\), and for \(p\to 0\) and \(t\to \infty\); see [1], p.41. Namely, if \(f(t)\sim At^{\nu}\) as \(t\to 0\), then \(L(f)(p)\sim A\Gamma(\nu +1)p^{-\nu -1}\) as \(p\to \infty\), \(\nu\neq -1,-2,\ldots\). If \(f(t)\sim At^{\nu}\) as \(t\to \infty\) then \(L(f)(p)\sim A\Gamma(\nu +1)p^{-\nu -1}\) as \(p\to 0\). Since \(p^{1/4}\to 0\) as \(p\to 0\), the asymptotics of \(v(t)\) as \(t\to \infty\) is of the same order as that of \(v_0\). As \(p\to \infty\), the singularity of \(v(t)\) as \(t\to 0\) is of order less than that of \(t^{-\frac 5 4}\). For example, assume that \(v_0\) is continuous as \(t\to 0\). Then we can take \(\nu\ge 0\). Consider the worst case \(\nu=0\). In this case \(L(v)(p)\) is of the order \(p^{-1-\frac 1 4}=p^{-\frac 5 4}\). Therefore \(v\sim t^{-\frac 1 4}\) as \(t\to 0\). This is an integrable singularity. Thus, \(v\) is less singular as \(t\to 0\) than the distribution \(t^{-\frac 5 4}\). Theorem 1 is proved.
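Two ingredients of the proof are easy to confirm numerically: the relation \(\Gamma(-\frac14)=-4\Gamma(\frac34)\) used in (5), and formula (3) in the classical range \(\operatorname{Re}\lambda>0\). The Python sketch below (an illustration using mpmath; the chosen values \(\lambda=\frac12\) and \(p=2\) are arbitrary) performs this check.

```python
import mpmath as mp

mp.mp.dps = 30  # working precision

# Gamma(z+1) = z*Gamma(z) with z = -1/4  =>  Gamma(-1/4) = -4*Gamma(3/4)
lhs = mp.gamma(mp.mpf(-1) / 4)
rhs = -4 * mp.gamma(mp.mpf(3) / 4)
assert abs(lhs - rhs) < mp.mpf('1e-25')

# Formula (3): L(t^(lambda-1))(p) = Gamma(lambda) p^(-lambda),
# checked for the classical value lambda = 1/2 at the sample point p = 2.
lam, p = mp.mpf(1) / 2, mp.mpf(2)
laplace = mp.quad(lambda t: t**(lam - 1) * mp.exp(-p * t), [0, mp.inf])
assert abs(laplace - mp.gamma(lam) * p**(-lam)) < mp.mpf('1e-15')
print(lhs, laplace)
```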

In [4] another result, similar to the one in this paper, is proved. In the review Zbl 07026037 of paper [5] it is claimed that the proof in [5] is incorrect. The reviewer argues that the integral in (1) diverges and is therefore equal to infinity. While this is true classically, it is not true in the sense of distributions, so the claim that the proof in [5] is incorrect is false. The reviewer also claims that \(\Phi_{-\frac 1 4}\) is not equal to \(\frac {t_+^{-\frac 5 4}}{\Gamma(-\frac 1 4)}\). This is not true if the space of test functions is \(\mathcal{K}\) (although it is true if the space of test functions is \(K\)).

2. Concluding remark

It is well known that equation (1) can be solved explicitly by the Laplace transform if \(\lambda>0\) and \(1-L(t^{\lambda -1})\neq 0\). To our knowledge, for \(\lambda< 0\) there were no results concerning the solvability of equation (1). The author became interested in (1) in the case \(\lambda=-\frac 1 4\) in connection with the millennium problem on the unique global solvability of the Navier-Stokes problem (NSP) in \(R^3\), which was solved in [5]; see also [3], Chapter 5.

Author Contributions

All authors contributed equally to the writing of this paper. All authors read and approved the final manuscript.

Competing Interests

The author(s) do not have any competing interests in the manuscript.

References

  1. Brychkov, Y. A., & Prudnikov, A. P. (1977). Integral transforms of generalized functions, Nauka, Moscow, (in Russian). [Google Scholor]
  2. Gel'fand, I. M., & Shilov, G. E. (1959). Generalized functions. Vol. 1, GIFML, Moscow, (in Russian).[Google Scholor]
  3. Ramm, A. G. (2019). Symmetry Problems. The Navier-Stokes Problem, Morgan & Claypool Publishers, San Rafael, CA. [Google Scholor]
  4. Ramm, A. G. (2018). Existence of the solutions to convolution equations with distributional kernels. Global Journal of Math. Analysis, 6(1), 1-2.[Google Scholor]
  5. Ramm, A. G. (2019). Solution of the Navier-Stokes problem. Applied Mathematics Letters , 87, 160-164. [Google Scholor]
A unified integral operator and further its consequences https://old.pisrt.org/psr-press/journals/oma-vol-4-issue-1-2020/a-unified-integral-operator-and-further-its-consequences/ Sun, 09 Feb 2020 09:44:31 +0000 https://old.pisrt.org/?p=3748
OMA-Vol. 4 (2020), Issue 1, pp. 1 - 7 Open Access Full-Text PDF

Open Journal of Mathematical Analysis

A unified integral operator and further its consequences

Ghulam Farid
COMSATS University Islamabad, Attock Campus, Pakistan.; ghlmfarid@ciit-attock.edu.pk

Abstract

The aim of this paper is to construct left sided and right sided integral operators in a unified form. These integral operators produce various well known integral operators in the theory of fractional calculus. Formulated integral operators of this study include generalized fractional integral operators of Riemann-Liouville type and operators containing Mittag-Leffler functions in their kernels. Also boundedness of all these fractional integral operators is derived from the boundedness of unified integral operators. The existence of new integral operators may have useful consequences in applied sciences besides in fractional calculus.

Keywords:

Integral operators, fractional integral operators, bounds.

1. Introduction

We start with the following compact form of fractional integrals:

Definition 1.[1] Let \(f:[a,b]\rightarrow\mathbb{R}\) be an integrable function. Also let \(g\) be an increasing and positive function on \((a, b]\), having a continuous derivative \(g^{\prime}\) on \((a,b)\). The left-sided and right-sided fractional integrals of a function \(f\) with respect to another function \(g\) on \([a, b]\) of order \(\mu\in\mathbb{C}\,\,(\mathcal{R}(\mu) > 0)\) are defined as:

\begin{equation}\label{1.5} _{g}^{\mu}I_{a^+}f(x)=\frac{1}{\Gamma({\mu})}\int_{a}^{x}(g(x)-g(t))^{{\mu}-1}g'(t)f(t)dt,\,\, x>a \end{equation}
(1)
and
\begin{equation}\label{1.6} _{g}^{\mu}I_{b^-}f(x)=\frac{1}{\Gamma({\mu})}\int_{x}^{b}(g(t)-g(x))^{{\mu}-1}g'(t)f(t)dt,\,\ x < b, \end{equation}
(2)
where \(\Gamma(.)\) is the gamma function.

A \(k\)-fractional analogue of above definition is given as follows:

Definition 2.[2] Let \(f:[a,b]\rightarrow\mathbb{R}\) be an integrable function. Also let \(g\) be an increasing and positive function on \((a, b]\), having a continuous derivative \(g^{\prime}\) on \((a,b)\). The left-sided and right-sided \(k\)-fractional integral operators, \(k>0\) of a function \(f\) with respect to another function \(g\) on \([a, b]\) of order \(\mu\in\mathbb{C}\,\,(\mathcal{R}(\mu) > 0)\) are defined as:

\begin{equation}\label{235} ^{\mu}_{g}I^{k}_{a^+}f(x)=\frac{1}{k\Gamma_k({\mu})}\int_{a}^{x}(g(x)-g(t))^{\frac{\mu}{k}-1}g'(t)f(t)dt,\,\, x>a \end{equation}
(3)
and
\begin{equation}\label{236} ^{\mu}_{g}I^{k}_{b^-}f(x)=\frac{1}{k\Gamma_k({\mu})}\int_{x}^{b}(g(t)-g(x))^{\frac{\mu}{k}-1}g'(t)f(t)dt,\,\ x < b, \end{equation}
(4)
where \begin{equation*} \Gamma_{k}(\mu)=\int_{0}^{\infty}t^{\mu-1}e^{\frac{-t^{k}}{k}}dt \end{equation*} is the \(k\)-gamma function.

The fractional integral operators studied in recent decades are special cases of the generalized classical integral operators of Riemann-Liouville type defined in (1) and (2). Instead of deriving the recently defined fractional integral operators from (3) and (4), authors have introduced them independently; see [1, 3, 4, 5, 6, 7, 8]. Remark 1 provides derivations of these fractional integrals from (1) and (2) along with their \(k\)-analogues (3) and (4). A detailed account of the fractional integrals associated with (3) and (4) that have been investigated in recent years is summarized in the following remark:

Remark 1. The fractional integrals (3) and (4) produce several known fractional integrals corresponding to different settings of \(k\) and \(g\):

  1. For \(k=1\), the fractional integrals (3) and (4) coincide with the fractional integrals (1) and (2).
  2. By taking \(g\) as the identity function, (3) and (4) coincide with the \(k\)-fractional Riemann-Liouville integrals defined by Mubeen et al. in [7].
  3. For \(k = 1\) along with \(g\) as the identity function, (3) and (4) coincide with the Riemann-Liouville fractional integrals [1].
  4. For \(k = 1\) and \(g(x)=\frac{x^\rho}{\rho}\), \(\rho>0\), (3) and (4) produce Katugampola fractional integrals defined by Chen et al. in [3].
  5. For \(k = 1\) and \(g(x)=\frac{x^{\tau+s}}{\tau+s}\) , (3) and (4) produce generalized conformable fractional integrals defined by Khan et al. in [6].
  6. If we take \(g(x)=\frac{(x-a)^s}{s}\), \(s>0\) in (3) and \(g(x)=-\frac{(b-x)^s}{s}\), \(s>0\) in (4), then conformable \((k,s)\)-fractional integrals are achieved as defined by Habib et al. in [4].
  7. If we take \(g(x)=\frac{x^{1+s}}{1+s}\), then conformable fractional integrals are achieved as defined by Sarikaya et al. in [8].
  8. If we take \(g(x)=\frac{(x-a)^s}{s}\), \(s>0\) in (3) and \(g(x)=-\frac{(b-x)^s}{s}\), \(s>0\) in (4) with \(k=1\), then conformable fractional integrals are achieved as defined by Jarad et al. in [5].

Moreover, various fractional integral operators have also been constructed by using the Mittag-Leffler function and its generalizations. Recently, Andrić et al. studied an extended generalized Mittag-Leffler function and the associated fractional integral operators in [9]. The extended generalized fractional integrals defined in [9] produce all fractional integrals containing the Mittag-Leffler function defined in [10, 11, 12, 13].

Definition 3.[9] Let \(\omega,\mu,\alpha,l,\gamma,c\in \mathbb{C}\), \(\Re(\mu),\Re(\alpha),\Re(l)>0\), \(\Re(c)>\Re(\gamma)>0\) with \(p\geq0\), \(\delta>0\) and \(0< \nu\leq\delta+\Re(\mu)\). Let \(f\in L_{1}[a,b]\) and \(x\in[a,b].\) Then the generalized fractional integral operators \(\epsilon_{\mu,\alpha,l,\omega,a^{+}}^{\gamma,\delta,\nu,c}f \) and \(\epsilon_{\mu,\alpha,l,\omega,b^{-}}^{\gamma,\delta,\nu,c}f\) are defined by:

\begin{equation}\label{16} \left( \epsilon_{\mu,\alpha,l,\omega,a^{+}}^{\gamma,\delta,\nu,c}f \right)(x;p)=\int_{a}^{x}(x-t)^{\alpha-1}E_{\mu,\alpha,l}^{\gamma,\delta,\nu,c}(\omega(x-t)^{\mu};p)f(t)dt, \end{equation}
(5)
and
\begin{equation}\label{7} \left( \epsilon_{\mu,\alpha,l,\omega,b^{-}}^{\gamma,\delta,\nu,c}f \right)(x;p)=\int_{x}^{b}(t-x)^{\alpha-1}E_{\mu,\alpha,l}^{\gamma,\delta,\nu,c}(\omega(t-x)^{\mu};p)f(t)dt, \end{equation}
(6)
where
\begin{equation}\label{7^} E_{\mu,\alpha,l}^{\gamma,\delta,\nu,c}(t;p)= \sum\limits_{n=0}^{\infty}\frac{\beta_{p}(\gamma+n\nu,c-\gamma)}{\beta(\gamma,c-\gamma)} \frac{(c)_{n\nu}}{\Gamma(\mu n +\alpha)} \frac{t^{n}}{(l)_{n \delta}}\end{equation}
(7)
is the extended generalized Mittag-Leffler function, \begin{equation*} \beta_{p}(x,y)=\int_{0}^{1}t^{x-1}(1-t)^{y-1}e^{-\frac{p}{t(1-t)}}dt \end{equation*} and \((c)_{n\nu}=\frac{\Gamma(c+n\nu)}{\Gamma(c)}\).
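The function (7) can be evaluated by truncating the series and computing \(\beta_p\) by quadrature. The Python sketch below is such an illustration (the truncation length, function names and parameter values are arbitrary choices of this note, not part of the paper); as a consistency check, for \(p=0\) and \(\mu=\alpha=\gamma=\delta=\nu=l=1\) the series reduces to \(\sum_{n\geq0}t^{n}/n!=e^{t}\) for any admissible \(c\).

```python
import math
from scipy.integrate import quad
from scipy.special import gamma as G

def beta_p(x, y, p):
    """Extended beta function: int_0^1 t^(x-1) (1-t)^(y-1) exp(-p/(t(1-t))) dt."""
    if p == 0:
        return G(x) * G(y) / G(x + y)   # ordinary beta function
    val, _ = quad(lambda t: t**(x - 1) * (1 - t)**(y - 1) * math.exp(-p / (t * (1 - t))), 0, 1)
    return val

def poch(z, s):
    """Generalized Pochhammer symbol (z)_s = Gamma(z+s)/Gamma(z)."""
    return G(z + s) / G(z)

def ext_mittag_leffler(t, mu, alpha, l, gam, delta, nu, c, p, n_terms=60):
    """Truncated series (7)."""
    denom = beta_p(gam, c - gam, 0.0)   # beta(gamma, c - gamma)
    return sum(beta_p(gam + n * nu, c - gam, p) / denom
               * poch(c, n * nu) / G(mu * n + alpha)
               * t**n / poch(l, n * delta)
               for n in range(n_terms))

# Consistency check: these parameters reduce (7) to sum t^n/n! = exp(t).
t = 0.7
val = ext_mittag_leffler(t, mu=1, alpha=1, l=1, gam=1, delta=1, nu=1, c=2.5, p=0)
assert abs(val - math.exp(t)) < 1e-6
print(val, math.exp(t))
```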

Remark 2. Particular settings of \(\omega,\nu,\delta, l,p, \gamma\) in the generalized Mittag-Leffler function yield the following consequences:

  1. Setting \(p=0\), (5) and (6) reduce to the fractional integral operators defined by Salim-Faraj in [10].
  2. Setting \(l=\delta=1\), (5) and (6) reduce to the fractional integral operators defined by Rahman et al. in [11].
  3. Setting \(p=0\) and \(l=\delta=1\), (5) and (6) reduce to the fractional integral operators defined by Shukla-Prajapati in [12].
  4. Setting \(p=0\) and \(l=\delta= \nu=1\), (5) and (6) reduce to the fractional integral operators defined by Prabhakar in [13].
  5. Setting \(p=\omega=0\), (5) and (6) reduce to the left-sided and right-sided Riemann-Liouville fractional integrals.

The extended generalized Mittag-Leffler function \(E_{\mu,\alpha,l}^{\gamma,\delta,\nu,c}(t;p)\) is absolutely convergent for \(\nu< \delta+\Re(\mu)\) (see [9]). If \(S\) denotes the sum of the series of absolute terms of the Mittag-Leffler function \(E_{\mu,\alpha,l}^{\gamma,\delta,\nu,c}(t;p)\), then \(\left|E_{\mu,\alpha,l}^{\gamma,\delta,\nu,c}(t;p)\right|\leq S\). This absolute convergence of the Mittag-Leffler function is used in establishing Theorems 6, 7 and 8. The rest of the paper is organized as follows:
In Section 2, we establish the existence of integral operators in a unified form. The bounds of these unified integral operators are also obtained. It is important to note that the unified integral operators produce almost all Riemann-Liouville type fractional integral operators as well as fractional integral operators containing the Mittag-Leffler function in their kernels. Furthermore, the bounds of all these fractional integral operators are obtained in Section 2 and Section 3, from the bounds which have been established for unified integral operators.

2. Existence of new unified integral operators

The first result provides the existence of new integral operators with upper bounds in variable form.

Theorem 4. Let \(f:[a,b]\longrightarrow \mathbb{R}\), \(0< a< b\), be a positive and integrable function, \(g:[a,b]\longrightarrow \mathbb{R}\) be differentiable and strictly increasing. Also let \(\frac{\phi}{x}\) be an increasing function on \([a,b]\) and \(\omega,\alpha,l,\gamma,c\in \mathbb{C}\), \(\Re(\alpha),\Re(l)>0\), \(\Re(c)>\Re(\gamma)>0\) with \(p\geq0\), \(\mu,\delta>0\) and \(0< \nu\leq\delta+\mu\). Then for \(x\in[a,b]\), we have

\begin{equation}\label{e1} \int_{a}^{x}\frac{\phi(g(x)-g(t))}{g(x)-g(t)}E^{\gamma, \delta, \nu, c}_{\mu, \alpha, l, }(\omega(g(x)-g(t))^{\mu}; p)g'(t)f(t)dt\leq\phi(g(x)-g(a)) E^{\gamma, \delta, \nu, c}_{\mu, \alpha, l, }(\omega(g(x)-g(a))^{\mu}; p)\lVert f\rVert_{[a,x]} \end{equation}
(8)
and \begin{equation}\label{e2} \int_{x}^{b}\frac{\phi(g(t)-g(x))}{g(t)-g(x)}E^{\gamma, \delta, \nu, c}_{\mu, \alpha, l}(\omega(g(t)-g(x))^{\mu}; p)g'(t)f(t)dt\leq \phi(g(b)-g(x)) E^{\gamma, \delta, \nu, c}_{\mu, \alpha, l}(\omega(g(b)-g(x))^{\mu}; p)\lVert f\rVert_{[x,b]}\nonumber \end{equation} where \(||f||_{[a,x]}=\sup\limits_{t\in[a,x]}|f(t)|\) and \(||f||_{[x,b]}=\sup\limits_{t\in[x,b]}|f(t)|\).

Proof. Since \(g\) is increasing, for \(t\in[a,x)\), \(x\in[a,b]\), we have \(g(x)-g(t)\leq g(x)-g(a)\). Since the function \(\frac{\phi}{x}\) is increasing, one can obtain:

\begin{equation}\label{3} \dfrac{\phi(g(x)-g(t))}{g(x)-g(t)}\leq\dfrac{\phi(g(x)-g(a))}{g(x)-g(a)}. \end{equation}
(9)
It is given that \(f\) is positive and \(g\) is differentiable and increasing. Therefore from (9), the following inequality is valid:
\begin{equation}\label{3*} \dfrac{\phi(g(x)-g(t))}{g(x)-g(t)}g'(t)f(t)\leq\dfrac{\phi(g(x)-g(a))}{g(x)-g(a)}g'(t)f(t). \end{equation}
(10)
Multiplying (10) by \( E^{\gamma, \delta, \nu, c}_{\mu, \alpha, l}(\omega(g(x)-g(t))^{\mu}; p)\) and integrating over \([a,x]\), we obtain
\begin{align}\label{f*} &\int_{a}^{x}\frac{\phi(g(x)-g(t))}{g(x)-g(t)}E^{\gamma, \delta, \nu, c}_{\mu, \alpha, l}(\omega(g(x)-g(t))^{\mu}; p)g'(t)f(t)dt\nonumber\\&\leq \frac{\phi(g(x)-g(a))}{g(x)-g(a)} \int_{a}^{x}E^{\gamma, \delta, \nu, c}_{\mu, \alpha, l}(\omega(g(x)-g(t))^{\mu}; p)g'(t)f(t)dt \end{align}
(11)
\begin{align}\label{f1} &\int_{a}^{x}\frac{\phi(g(x)-g(t))}{g(x)-g(t)}E^{\gamma, \delta, \nu, c}_{\mu, \alpha, l}(\omega(g(x)-g(t))^{\mu}; p)g'(t)f(t)dt\nonumber\\&\leq \frac{\phi(g(x)-g(a))}{g(x)-g(a)} \sum_{n=0}^{\infty} \frac{\beta_{p}(\gamma+n\nu, c-\gamma)}{\beta(\gamma, c-\gamma)}\frac{(c)_{n\nu}}{\Gamma(\mu n+\alpha)(l)_{n\delta}} \lVert f \rVert _{[a, x]}\int_{a}^{x}\omega^{n}(g(x)-g(t))^{\mu n}g'(t)dt. \end{align}
(12)
Evaluating the integral and using the fact that \(\frac{1}{\mu n+1} \leq 1 \), inequality (8) can be obtained. On the other hand, for \(t\in(x,b]\) and \(x\in[a,b]\), the following inequality holds:
\begin{equation}\label{4*} \dfrac{\phi(g(t)-g(x))}{(g(t)-g(x))}g'(t)f(t)\leq\dfrac{\phi(g(b)-g(x))}{(g(b)-g(x))}g'(t)f(t). \end{equation}
(13)
Multiplying (13) by \( E^{\gamma, \delta, \nu, c}_{\mu, \alpha, l}(\omega(g(t)-g(x))^{\mu}; p)\) and integrating over \((x,b]\), we obtain
\begin{align}\label{h*} &\int_{x}^{b}\frac{\phi(g(t)-g(x))}{g(t)-g(x)}E^{\gamma, \delta, \nu, c}_{\mu, \alpha, l}(\omega(g(t)-g(x))^{\mu}; p)g'(t)f(t)dt\nonumber\\&\leq \frac{\phi(g(b)-g(x))}{g(b)-g(x)} \int_{x}^{b}E^{\gamma, \delta, \nu, c}_{\mu, \alpha, l}(\omega(g(t)-g(x))^{\mu}; p)g'(t)f(t)dt. \end{align}
(14)
From this, the second inequality of Theorem 4 can be obtained.

Motivated by the above theorem, we give the definition of a new unified two-sided integral operator as follows:

Definition 5. Let \(f,g:[a,b]\longrightarrow \mathbb{R}\), \(0< a< b\), be functions such that \(f\) is positive with \(f\in L_{1}[a,b]\), and \(g\) is differentiable and strictly increasing. Also let \(\frac{\phi}{x}\) be an increasing function on \([a,\infty)\) and \(\omega,\alpha,l,\gamma,c\in \mathbb{C}\), \(\Re(\alpha),\Re(l)>0\), \(\Re(c)>\Re(\gamma)>0\) with \(p\geq0\), \(\mu,\delta>0\) and \(0< \nu\leq\delta+\mu\). Then for \(x\in[a,b]\) the left and right integral operators are defined by

\begin{equation}\label{a} (_gF_{\mu, \alpha, l, {a^+}}^{\phi, \gamma, \delta, \nu, c}f)(x;p)=\int_{a}^{x}\dfrac{\phi(g(x)-g(t))}{g(x)-g(t)}E^{\gamma, \delta, \nu, c}_{\mu, \alpha, l}(\omega(g(x)-g(t))^{\mu}; p)g'(t)f(t)dt \end{equation}
(15)
and
\begin{equation}\label{b} (_gF_{\mu, \alpha, l, {b^-}}^{\phi, \gamma, \delta, \nu, c}f)(x;p) =\int_{x}^{b}\dfrac{\phi(g(t)-g(x))}{g(t)-g(x)}E^{\gamma, \delta, \nu, c}_{\mu, \alpha, l}(\omega(g(t)-g(x))^{\mu}; p)g'(t)f(t)dt. \end{equation}
(16)
The upcoming theorem provides the boundedness of the unified integral operators.

Theorem 6. Under the assumptions of Theorem 4, the following bounds hold for the integral operators (15) and (16):

\begin{equation}\label{o} \left|(_gF_{\mu, \alpha, l, {a^+}}^{\phi, \gamma, \delta, \nu, c}f)(x;p)\right|\leq S\left|\phi(g(b)-g(a))\right|\lVert f\lVert_{[a,b]} \end{equation}
(17)
and
\begin{equation}\label{p} \left|(_gF_{\mu, \alpha, l, {b^-}}^{\phi, \gamma, \delta, \nu, c}f)(x;p)\right|\leq S\left|\phi(g(b)-g(a))\right|\lVert f\lVert_{[a,b]}. \end{equation}
(18)
Hence
\begin{equation}\label{op} \left|(_gF_{\mu, \alpha, l, {a^+}}^{\phi, \gamma, \delta, \nu, c}f)(x;p)+(_gF_{\mu, \alpha, l, {b^-}}^{\phi, \gamma, \delta, \nu, c}f)(x;p)\right|\leq 2S\left|\phi(g(b)-g(a))\right|\lVert f\lVert_{[a,b]}, \end{equation}
(19)
where \(S\) is the sum of absolute terms of series in (7) and \(||f||_{[a,b]}=\sup\limits_{t\in[a,b]}|f(t)|\).

Proof. From (11), one obtains \begin{equation*} \left|\int_{a}^{x}\frac{\phi(g(x)-g(t))}{g(x)-g(t)}E^{\gamma, \delta, \nu, c}_{\mu, \alpha, l}(\omega(g(x)-g(t))^{\mu}; p)g'(t)f(t)dt\right|\leq S\left| \frac{\phi(g(x)-g(a))}{g(x)-g(a)}\right| \int_{a}^{x}\left|g'(t)f(t)\right|dt\nonumber \end{equation*} Simplifying the above inequality, we get \begin{equation*} \left|(_gF_{\mu, \alpha, l, {a^+}}^{\phi, \gamma, \delta, \nu, c}f)(x;p)\right|\leq S\left|\frac{\phi(g(x)-g(a))}{g(x)-g(a)}\right|\lVert f\lVert_{[a,x]}(g(x)-g(a)). \end{equation*} Since \(g(x)-g(a)\leq g(b)-g(a)\), we have

\begin{equation} \frac{\phi(g(x)-g(a))}{g(x)-g(a)}\leq \frac{\phi(g(b)-g(a))}{g(b)-g(a)} \end{equation}
(20)
and hence (17) follows. Similarly, from (14), one can obtain \begin{equation*} \left|(_gF_{\mu, \alpha, l, {b^-}}^{\phi, \gamma, \delta, \nu, c}f)(x;p)\right|\leq S\left|\frac{\phi(g(b)-g(x))}{g(b)-g(x)}\right|\lVert f\lVert_{(x,b]}(g(b)-g(x)) \end{equation*} and hence (18) follows. Combining (17) and (18) gives the inequality (19).
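The bound (17) can be illustrated numerically in the simplest admissible setting: for \(p=\omega=0\) only the \(n=0\) term of (7) survives, so the kernel function equals \(1/\Gamma(\alpha)\) and one may take \(S=1/\Gamma(\alpha)\); with \(\phi(x)=x^{\alpha}\), \(\alpha>1\), and \(g\) the identity map, the left operator (15) becomes \(\frac{1}{\Gamma(\alpha)}\int_a^x(x-t)^{\alpha-1}f(t)dt\). The Python sketch below (with arbitrarily chosen \(a\), \(b\), \(\alpha\) and \(f\), an assumption of this illustration) checks the resulting estimate \(S\,\phi(g(b)-g(a))\,\lVert f\rVert_{[a,b]}\).

```python
import math
import numpy as np
from scipy.integrate import quad

a, b, alpha = 1.0, 3.0, 1.5           # alpha > 1, so phi(u)/u = u^(alpha-1) is increasing
f = lambda t: 2.0 + np.sin(t)         # a positive integrable function on [a, b]

ts = np.linspace(a, b, 4001)
f_sup = float(np.max(f(ts)))          # numerical estimate of ||f|| on [a, b]

S = 1.0 / math.gamma(alpha)           # kernel value 1/Gamma(alpha) for p = omega = 0

def left_operator(x):
    """(15) with p = omega = 0, phi(u) = u^alpha and g the identity map."""
    val, _ = quad(lambda t: (x - t)**(alpha - 1) * f(t), a, x)
    return S * val

bound = S * (b - a)**alpha * f_sup    # right-hand side of (17)
values = [left_operator(x) for x in np.linspace(a + 0.1, b, 6)]
assert all(v <= bound for v in values)
print("largest operator value:", max(values), "  bound:", bound)
```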

The integral operators defined in (15) and (16) are connected with the fractional integral operators given in Section 1. The upcoming section describes the connection of these integral operators with the fractional integral operators (5) and (6), and their bounds are computed.

3. Bounds of fractional integral operators containing Mittag-Leffler functions

Throughout this section, we set \(\phi(x)=x^\alpha\), \(\alpha>0\), and \(g(x)=I(x)\), where \(I\) denotes the identity function. In this case the bounds of the fractional integral operators containing Mittag-Leffler functions can be obtained at once from the bounds of the unified integral operators (15) and (16). As an example, the bounds of the fractional integral operators defined by Andrić et al. in [9] are obtained. Computation of the rest of the bounds of the related fractional integrals described in Remark 2 is left to the reader.

Theorem 7. The fractional integral operators of a function \(f\) defined in (5) and (6) are bounded for \(\alpha>1\); further, the following inequality holds:

\begin{equation}\label{ooo} \left|\left( \epsilon_{\mu,\alpha,l,\omega,a^{+}}^{\gamma,\delta,\nu,c}f \right)(x;p)+\left( \epsilon_{\mu,\alpha,l,\omega,b^{-}}^{\gamma,\delta,\nu,c}f \right)(x;p)\right|\leq 2S\left|b^\alpha-a^\alpha\right|\lVert f\lVert_{[a,b]},\alpha>1. \end{equation}
(21)

Proof. Let \(g(x)=x\) and \(\phi(x)=x^\alpha\). Then \(\frac{\phi}{x}\) is increasing for \(\alpha>1\). Therefore \begin{equation*} (_gF_{\mu, \alpha, l, {a^+}}^{\phi, \gamma, \delta, \nu, c}f)(x;p)=(_IF_{\mu, \alpha, l, {a^+}}^{x^{\alpha}, \gamma, \delta, \nu, c}f)(x;p):=\left( \epsilon_{\mu,\alpha,l,\omega,a^{+}}^{\gamma,\delta,\nu,c}f \right)(x;p) \end{equation*} and \begin{equation*}(_gF_{\mu, \alpha, l, {b^-}}^{\phi, \gamma, \delta, \nu, c}f)(x;p)=(_IF_{\mu, \alpha, l, {b^-}}^{x^{\alpha}, \gamma, \delta, \nu, c}f)(x;p):=\left( \epsilon_{\mu,\alpha,l,\omega,b^{-}}^{\gamma,\delta,\nu,c}f \right)(x;p). \end{equation*} Thus

\begin{equation}\label{oo} \left|\left( \epsilon_{\mu,\alpha,l,\omega,a^{+}}^{\gamma,\delta,\nu,c}f \right)(x;p)\right|\leq S\left|b^\alpha-a^\alpha\right|\lVert f\lVert_{[a,b]} \end{equation}
(22)
and
\begin{equation}\label{pp} \left|\left( \epsilon_{\mu,\alpha,l,\omega,b^{-}}^{\gamma,\delta,\nu,c}f \right)(x;p)\right|\leq S\left|b^\alpha-a^\alpha\right|\lVert f\lVert_{[a,b]}. \end{equation}
(23)
Hence the boundedness of the fractional operators (5) and (6) follows and (21) is obtained.

Remark 3. By using (22) and (23), the boundedness of all fractional integrals containing Mittag-Leffler functions compiled in Remark 2 can be obtained.

4. Bounds of fractional integral operators associated with generalized \(k\)-fractional integrals

Throughout this section, we set \(\phi(x)=\frac{x^{\frac{\beta}{k}}}{k\Gamma_k({\beta})}\), \(\beta,k>0\). In this case the bounds of the fractional integral operators defined in [10, 11, 12, 14] can be obtained at once from the bounds of the unified integral operators (15) and (16). As an example, the bounds of the fractional integral operators (3) and (4) are obtained. Computation of the rest of the bounds of the related fractional integrals described in Remark 1 is left to the reader.

Theorem 8. The generalized fractional integral operators of a function \(f\) defined in (3) and (4) are bounded for \(\beta>k\); further, the following inequality holds:

\begin{equation}\label{oooo} \left|^{\beta}_{g}I^{k}_{a^+}f(x)+\,^{\beta}_{g}I^{k}_{b^-}f(x)\right|\leq \frac{2S}{k\Gamma_k({\beta})}\left|g(b)^\beta-g(a)^\beta\right|\lVert f\lVert_{[a,b]},\beta>k. \end{equation}
(24)

Proof. Let \(\phi(x)=\frac{x^{\frac{\beta}{k}}}{k\Gamma_k({\beta})},\beta,k>0\) and \(p=\omega=0\). Then \(\frac{\phi}{x}\) is increasing for \(\beta>k\). Therefore \begin{equation*} (_gF_{\mu, \alpha, l, {a^+}}^{\phi, \gamma, \delta, \nu, c}f)(x;p)=(_gF_{\mu, \alpha, l, {a^+}}^{\frac{x^{\frac{\beta}{k}}}{k\Gamma_k({\beta})}, \gamma, \delta, \nu, c}f)(x;0):=\,^{\beta}_{g}I^{k}_{a^+}f(x) \end{equation*} and \begin{equation*} (_gF_{\mu, \alpha, l, {b^-}}^{\phi, \gamma, \delta, \nu, c}f)(x;p)=(_gF_{\mu, \alpha, l, {b^-}}^{\frac{x^{\frac{\beta}{k}}}{k\Gamma_k({\beta})}, \gamma, \delta, \nu, c}f)(x;0):=\,^{\beta}_{g}I^{k}_{b^-}f(x). \end{equation*} Thus

\begin{equation}\label{1oo} \left|^{\beta}_{g}I^{k}_{a^+}f(x)\right|\leq \frac{S}{k\Gamma_k({\beta})}\left|g(b)^\beta-g(a)^\beta\right|\lVert f\lVert_{[a,b]} \end{equation}
(25)
and
\begin{equation}\label{1pp} \left|^{\beta}_{g}I^{k}_{b^-}f(x)\right|\leq \frac{S}{k\Gamma_k({\beta})}\left|g(b)^\beta-g(a)^\beta\right|\lVert f\lVert_{[a,b]}. \end{equation}
(26)
Hence the boundedness of the fractional operators (3) and (4) follows and (24) is obtained.

Remark 4. By using (25) and (26), the boundedness of all fractional integrals compiled in Remark 1 can be obtained. Also, by setting \(g(x)=I(x)\) in (22) and (23), the integral operators defined in [14] can be obtained, and using Theorem 6 the bounds of these integral operators can be achieved.

5. Concluding remarks

The aim of this study is to develop unified integral operators which provide the fractional integral operators of Riemann-Liouville type as well as fractional integral operators containing Mittag-Leffler functions in their kernels. The bounds of these new integral operators are computed which simultaneously provide the bounds of all fractional integral operators defined in [1, 3, 4, 5, 6, 7, 8, 10, 11, 12, 13]. The existence of new generalized integral operators may be useful in applied sciences along with the theory and applications of fractional calculus.

Author Contributions

All authors contributed equally to the writing of this paper. All authors read and approved the final manuscript.

Competing Interests

The author(s) do not have any competing interests in the manuscript.

References

  1. Kilbas, A. A. A., Srivastava, H. M., & Trujillo, J. J. (2006). Theory and applications of fractional differential equations (Vol. 204). Elsevier Science Limited.[Google Scholor]
  2. Kwun, Y. C., Farid, G., Nazeer, W., Ullah, S., & Kang, S. M. (2018). Generalized Riemann-Liouville \(k\)-Fractional Integrals Associated With Ostrowski Type Inequalities and Error Bounds of Hadamard Inequalities. IEEE Access, 6, 64946-64953. [Google Scholor]
  3. Chen, H., & Katugampola, U. N. (2017). Hermite–Hadamard and Hermite–Hadamard–Fejér type inequalities for generalized fractional integrals. Journal of Mathematical Analysis and Applications, 446(2), 1274-1291. [Google Scholor]
  4. Habib, S., Mubeen, S., & Naeem, M. N. (2018). Chebyshev type integral inequalities for generalized k-fractional conformable integrals. Journal of Inequalities and Special Functions, 9, 53-65. [Google Scholor]
  5. Jarad, F., Ugurlu, E., Abdeljawad, T., & Baleanu, D. (2017). On a new class of fractional operators. Advances in Difference Equations, 2017(1), 247. [Google Scholor]
  6. Khan, T. U., & Khan, M. A. (2019). Generalized conformable fractional operators. Journal of Computational and Applied Mathematics, 346, 378-389. [Google Scholor]
  7. Mubeen, S., & Habibullah, G. M. (2012). \(k\)-Fractional integrals and application. International Journal of Contemporary Mathematical Sciences, 7(2), 89-94. [Google Scholor]
  8. Sarikaya, M. Z., Dahmani, Z., Kiris, M. E., & Ahmad, F. (2016). \((k,s)\)-Riemann-Liouville fractional integral and applications. Hacettepe Journal of Mathematics and Statistics, 45(1), 77-89. [Google Scholor]
  9. Andrić, M., Farid, G., & Pečarić, J. (2018). A further extension of Mittag-Leffler function. Fractional Calculus and Applied Analysis, 21(5), 1377-1395. [Google Scholor]
  10. Salim, T. O., & Faraj, A. W. (2012). A generalization of Mittag-Leffler function and integral operator associated with fractional calculus. Journal of Fractional Calculus and Applications, 3(5), 1-13. [Google Scholor]
  11. Rahman, G., Baleanu, D., Qurashi, M. A., Purohit, S. D., Mubeen, S., & Arshad, M. (2017). The extended Mittag-Leffler function via fractional calculus. Journal of Nonlinear Sciences and Applications, 10, 4244-4253. [Google Scholor]
  12. Srivastava, H. M., & Tomovski, Ž. (2009). Fractional calculus with an integral operator containing a generalized Mittag–Leffler function in the kernel. Applied Mathematics and Computation, 211(1), 198-210. [Google Scholor]
  13. Prabhakar, T. R. (1971). A singular integral equation with a generalized Mittag-Leffler function in the kernel. Yokohama Mathematical Journal, 19, 7-15. [Google Scholor]
  14. Sarikaya, M. Z., & Ertugral, F. (2017). On the generalized Hermite-Hadamard inequalities. https://www.researchgate.net/publication/321760443. [Google Scholor]