OMS-Vol. 3 (2019), Issue 1, pp. 440-446

Open Journal of Mathematical Sciences

Pseudo-valuations and pseudo-metric on JU-algebras

Usman Ali, Moin A. Ansari, Masood Ur Rehman\(^1\)
Center for Advanced Studies in Pure and Applied Mathematics, Bahauddin Zakariya University, Multan, Pakistan.; (U.A)
Department of Mathematics, College of Science, Post Box 2097, Jazan University, Jazan, KSA.; (M.A.A)
Department of Mathematics, Abbottabad University of Science and Technology, Abbottabad, Pakistan.; (M.U.R)
\(^{1}\)Corresponding Author: masoodqau27@gmail.com

Abstract

In this paper we introduce the concept of pseudo-valuations on JU-algebras and investigate the relationship between pseudo-valuations and ideals of JU-algebras. Conditions for a real-valued function to be a pseudo-valuation on a JU-algebra are given, and results based on them are established. We also define and study pseudo-metrics on JU-algebras and prove that if \(\vartheta\) is a valuation on a JU-algebra \(A\), then the operation \(\diamond\) in \(A\) is uniformly continuous.

Keywords:

JU-algebra, JU-ideal, valuation, pseudo-valuations, pseudo metric.

1. Introduction

Pseudo-valuations on residuated lattices were introduced by Busneag [1], where many theorems on pseudo-valuations in lattice terms, together with an extension theorem from valuations to pseudo-valuations on residuated lattices, were proved following the model of Hilbert algebras [2]. In fact, pseudo-valuations on Hilbert algebras were first introduced by Busneag [3], where it was proved that every pseudo-valuation induces a pseudo-metric on a Hilbert algebra. Busneag [2] later proved many results on extensions of pseudo-valuations.

Logical algebras have attracted keen interest from researchers in recent years and have been studied intensively in connection with various mathematical concepts. Doh and Kang [4] introduced the concept of pseudo-valuations on BCK/BCI-algebras and proved several results based on them. Ghorbani [5] defined congruence relations and gave a quotient structure of BCI-algebras based on pseudo-valuations. Zhan and Jun [6] studied pseudo-valuations on \(R_{0}\)-algebras. Based on the concept of pseudo-valuations in \(R_{0}\)-algebras, Yang and Xin [7] characterized pseudo pre-valuations on EQ-algebras. Mehrshad and Kouhestani studied pseudo-valuations on BCK-algebras [8]. Pseudo-valuations on BCC-algebras were introduced by Jun et al. [9], who showed that the binary operation in a BCC-algebra is uniformly continuous. Recently, Moin et al. [16] introduced JU-algebras and their \(p\)-closure ideals.

UP-algebras were introduced by Iampan [10] as a new branch of logical algebras. Naveed et al. [11] introduced the concept of cubic KU-ideals of KU-algebras. Moin and Ali [12] recently gave the concept of roughness in KU-algebras, whereas rough set theory in UP-algebras has been introduced and studied by Moin et al. [13]. Next, a graph associated to UP-algebras was introduced by Moin et al. [14]. Romano studied pseudo-valuations on UP-algebras in [15].

In this paper, we focus on pseudo-valuations applied to JU-algebras and discuss related results. We define pseudo-valuations on JU-algebras following the model of Busneag and introduce a pseudo-metric on JU-algebras. We also prove that the binary operation defined on a JU-algebra is uniformly continuous with respect to the induced pseudo-metric.

2. Preliminaries and basic properties of JU-algebras

In this section, we shall introduce JU-algebras, JU-subalgebras, JU-ideals and other important terminologies with examples and some related results.

Definition 1. An algebra \((X,\diamond ,1)\) of type \((2,0)\) with a single binary operation \(\diamond \) is said to be a JU-algebra if it satisfies the following identities for any \(u,v,w\in X\):
\((JU_1)\) \((u\diamond v)\diamond \lbrack (v\diamond w)\diamond (u\diamond w)]=1,\)
\((JU_2)\) \(1\diamond u=u,\)
\((JU_3)\) \(u\diamond v=v\diamond u=1\) implies \(u=v.\)

We call the constant \(1\) of \(X\) the fixed element of \(X.\) For the sake of convenience, we write \(X\) instead of \((X, \diamond , 1)\) to represent a JU-algebra. We define a relation \("\leq "\) in \(X\) by \(v\leq u\) if and only if \(u\diamond v=1.\) If we add the condition \(u\diamond 1=1\) for all \(u\in X\) in the definition of JU-algebras, then we get that \(X\) is a KU-algebra. Therefore, JU-algebra is a generalization of KU-algebras.

Lemma 2. If \(X\) is a JU-algebra, then \((X, \leq )\) is a partially ordered set, i.e.,
\((J_4)\) \(u\leq u,\)
\((J_5)\) \(u\leq v, v\leq u,\) implies \(u=v,\)
\((J_6)\) \(u\leq w, w\leq v,\) implies \(u\leq v.\)

Proof. Taking \(u=v=1\) in \((JU_{1})\) gives \((1\diamond 1)\diamond [(1\diamond w)\diamond (1\diamond w)]=1,\) which by \((JU_2)\) reduces to \(w\diamond w=1,\) i.e. \(w\leq w,\) which proves \((J_4).\) \((J_5)\) follows directly from \((JU_3).\) For \((J_6),\) suppose \(u\leq w\) and \(w\leq v;\) then \(w\diamond u=1\) and \(v\diamond w=1.\) By \((JU_1),\) \((v\diamond w)\diamond [(w\diamond u)\diamond (v\diamond u)]=1,\) and applying \((JU_2)\) twice gives \(v\diamond u=1,\) that is, \(u\leq v.\)

Further we have the following Lemma for a JU-algebra \(X.\)

Lemma 3. If \(A\) is a JU-algebra, then the following inequalities hold for any \(u,v,w\in A\):
\((J_7)\) \(u\leq v\) implies \(v\diamond w\leq u\diamond w,\)
\((J_8)\) \(u\leq v\) implies \(w\diamond u\leq w\diamond v,\)
\((J_9)\) \((w\diamond u)\diamond (v\diamond u)\leq v\diamond w,\)
\((J_{10})\) \((v\diamond u)\diamond u\leq v.\)

Proof. \((J_7),\;(J_8)\) and \((J_9)\) follow from \((JU_1)\) by suitable substitutions of elements. \((J_{10})\) follows from \((JU_1)\) and Definition 1.

Next, we have the following Lemmas.

Lemma 4. Any JU-algebra \(A\) satisfies the following conditions for any \(u, v, w\in A\):
\((J_{11})\) \(u\diamond u=1,\)
\((J_{12})\) \(w\diamond (v\diamond u)=v\diamond (w\diamond u),\)
\((J_{13})\) If \((u\diamond v)\diamond v=1,\) then \(A\) is a KU-algebra,
\((J_{14})\) \((v\diamond u)\diamond 1=(v\diamond 1)\diamond (u\diamond 1).\)

Proof. Putting \(u=v=1\) in \((JU_1)\) and using \((JU_2)\), we get \(w\diamond w=1\) for every \(w\in A,\) which proves \((J_{11})\). For \((J_{12})\), we have \((w\diamond u)\diamond u\leq w\) by \((J_{10})\). Applying \((J_7)\), we get

\begin{equation}\label{EQ1} w\diamond (v\diamond u)\leq ((w\diamond u)\diamond u)\diamond (v\diamond u). \end{equation}
(1)
Replace \(w\) by \(w\diamond u\) in \((JU_1)\), we get \([v\diamond (w\diamond u)]\diamond [((w\diamond u)\diamond u)\diamond (v\diamond u)]=1\), which implies
\begin{equation}\label{EQ2} ((w\diamond u)\diamond u)\diamond (v\diamond u)\leq v\diamond (w\diamond u). \end{equation}
(2)
From (1), (2) and Lemma 2(\(J_6\)) we get,
\begin{equation}\label{EQ3} w\diamond (v\diamond u)\leq v\diamond (w\diamond u). \end{equation}
(3)
Further by replacing \(v\) with \(w\) and \(w\) with \(v\) in (3), we get
\begin{equation}\label{EQ4} v\diamond (w\diamond u)\leq w\diamond (v\diamond u). \end{equation}
(4)
Now (3), (4) and (\(J_5\)) yield \(w\diamond (v\diamond u)=v\diamond (w\diamond u).\) In order to prove \((J_{13})\), we just need to show that \(u\diamond 1=1 \;\; \forall \;\; u\in A.\) Replacing \(u\rightarrow 1, v\rightarrow u, w\rightarrow 1\) in \((JU_1),\) we obtain \((1\diamond u)\diamond [(u\diamond 1)\diamond (1\diamond 1)]=1\Rightarrow u\diamond [(u\diamond 1)\diamond 1]=1\Rightarrow u\diamond 1=1\) (by using \(v=1\) in the given condition of \((J_{13})\)).
Using \((J_{12})\) for any \(u,v\in A\) we see that \((v\diamond 1)\diamond (u\diamond 1)=(v\diamond 1)\diamond (u\diamond [(v\diamond u)\diamond (v\diamond u)])=(v\diamond 1)\diamond [(v\diamond u)\diamond (u\diamond (v\diamond u))]= (v\diamond u)\diamond [(v\diamond 1)\diamond (v\diamond (u\diamond u))]= (v\diamond u)\diamond [(v\diamond 1)\diamond (v\diamond 1)]=(v\diamond u)\diamond 1,\) which shows that \((J_{14})\) holds.

Definition 5. A non-empty subset \(I\) of a JU-algebra \(A\) is called a JU-ideal of \(A\) if it satisfies the following conditions:
(1)\(\ 1\in I,\)
(2) \(u\diamond (v\diamond w)\in I\) and \(v\in I\) imply \(u\diamond w\in I,\) for all \(u,v,w\in A.\)

3. Pseudo-valuations and pseudo-metric

Definition 6. A real-valued function \(\vartheta \) on a JU-algebra \(A\) is called a pseudo-valuation on \(A\) if it satisfies the following two conditions:
(1) \(\vartheta (1) = 0,\)
(2) \({\vartheta (u\diamond w)\leq \vartheta (u\diamond (v\diamond w))+\vartheta(v)}\) for all \( u, v, w\in A.\) A pseudo-valuation \(\vartheta \) on a JU-algebra \(A\) satisfying the following condition:
\(\vartheta (u)= 0\Rightarrow u=1\) for all \( u \in A\) is called a valuation on \(A\).

Example 1. Let \(A=\{1, 2, 3, 4\}\) be a set with a binary operation \(\diamond\) defined by the following Cayley table:

\(\diamond \) \(1\) \(2\) \(3\) \(4\)
\(1\) \(1\) \(2\) \(3\) \(4\)
\(2\) \(1\) \(1\) \(1\) \(4\)
\(3\) \(1\) \(2\) \(1\) \(4\)
\(4\) \(1\) \(2\) \(1\) \(1\)
Here \(A\) is a JU-algebra, and the real-valued function defined on \(A\) by \(\vartheta (1)=0,\) \(\vartheta (2)=\vartheta (3)=1,\) and \(\vartheta (4)=3\) is a pseudo-valuation on \(A\).
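Such finite examples can be checked mechanically. The following minimal sketch (an illustration of ours, not part of the paper; all names are illustrative) brute-forces the axioms \((JU_1)\)-\((JU_3)\) and the two conditions of Definition 6 over the table above.

import itertools

A = [1, 2, 3, 4]
# Cayley table of Example 1: T[(u, v)] = u "diamond" v
T = {(1, 1): 1, (1, 2): 2, (1, 3): 3, (1, 4): 4,
     (2, 1): 1, (2, 2): 1, (2, 3): 1, (2, 4): 4,
     (3, 1): 1, (3, 2): 2, (3, 3): 1, (3, 4): 4,
     (4, 1): 1, (4, 2): 2, (4, 3): 1, (4, 4): 1}
theta = {1: 0, 2: 1, 3: 1, 4: 3}   # candidate pseudo-valuation of Example 1

d = lambda u, v: T[(u, v)]

# axioms (JU_1), (JU_2), (JU_3)
ju1 = all(d(d(u, v), d(d(v, w), d(u, w))) == 1
          for u, v, w in itertools.product(A, repeat=3))
ju2 = all(d(1, u) == u for u in A)
ju3 = all(not (d(u, v) == 1 and d(v, u) == 1 and u != v)
          for u, v in itertools.product(A, repeat=2))

# Definition 6: theta(1) = 0 and theta(u*w) <= theta(u*(v*w)) + theta(v)
pv = theta[1] == 0 and all(theta[d(u, w)] <= theta[d(u, d(v, w))] + theta[v]
                           for u, v, w in itertools.product(A, repeat=3))

print(ju1, ju2, ju3, pv)   # expected: True True True True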

Proposition 7. Let \(\vartheta \) be a pseudo-valuation on a JU-algebra \(A\). Then we have
(1) \(u\leq v\Rightarrow \vartheta (v)\leq \vartheta (u).\)
(2) \(\vartheta ((u\diamond (v\diamond w))\diamond w)\leq \vartheta (u)+\vartheta (v)\) for all \( u, v, w\in A.\)

Proof. (1) Let \(u, v\in A \) be such that \(u \leq v\). Replacing \(u=1,\) \(v=u,\) \(w=v\) in Definition 6 and Definition 1, we get \(\vartheta (v)=\vartheta (1\diamond v)\leq \vartheta (1\diamond (u\diamond v))+\vartheta(u)=\vartheta (1\diamond 1)+\vartheta(u)=\vartheta(1)+\vartheta(u)=\vartheta(u).\)
(2) If we replace \(u\) by \(u\diamond (v\diamond w)\) in Definition 6(2), then we get $$ \vartheta ((u\diamond (v\diamond w))\diamond w)\leq \vartheta ((u\diamond (v\diamond w))\diamond (v\diamond w))+\vartheta (v),$$ again applying Definition 6 (2) by choosing \(u=u\diamond (v\diamond w)\) and \(w=v\diamond w\), we get $$ \vartheta ((u\diamond (v\diamond w))\diamond w)\leq \vartheta [(u\diamond (v\diamond w))\diamond (u\diamond(v\diamond w))]+\vartheta(u)+\vartheta (v)=\vartheta(1)+\vartheta(u)+\vartheta(v)$$ $$\Rightarrow \vartheta ((u\diamond (v\diamond w))\diamond w)\leq\vartheta(u)+\vartheta(v).$$

Corollary 8. A pseudo-valuation \(\vartheta\) on a JU-algebra \(A\) satisfies the inequality \(\vartheta (u)\geq 0\) for all \(u\in A.\)

Proposition 9. If \(\vartheta \) is a pseudo-valuation on a JU-algebra \(A\), then we have \(\vartheta ((u\diamond v)\diamond v)\leq \vartheta (u)\) for all \(u, v\in A.\)

Proof. It is easy to see that the required inequality holds by considering \(v=1\) and \(w=v\) in Proposition 7(2) and using Definition 1.

The following results are devoted to finding conditions for a real-valued function on a JU-algebra \(A\) to be a pseudo-valuation.

Theorem 10. Let \(\vartheta \) be a real valued function on a JU-algebra \(A\) satisfying the following conditions:
(a) If \(\vartheta (a)\leq \vartheta (u)\) for all \( u\in A\), then \(\vartheta (a)=0,\)
(b) \(\vartheta (u\diamond v)\leq \vartheta (v)\) for all \( u, v\in A,\)
(c) \(\vartheta ((u\diamond (v\diamond w))\diamond w)\leq \vartheta (u)+\vartheta (v),\)
(d) \(\vartheta (v\diamond (u\diamond w))\leq \vartheta (u\diamond (v\diamond w)).\)
Then \(\vartheta \) is a pseudo-valuation on \(A.\)

Proof. From Lemma 4 and given condition (b), we have \(\vartheta (1)= \vartheta (u\diamond u)\leq \vartheta (u)\) for all \( u\in A\) and hence \(\vartheta (1)=0,\) using given condition (a). Now, from Definition 1, Lemma 4 and given condition (c), we get \(\vartheta (v)=\vartheta (1\diamond v)= \vartheta (((u\diamond v)\diamond (u\diamond v))\diamond v)\leq \vartheta (u\diamond v)+ \vartheta (u)\) for all \( u, v\in A\). It follows from given condition (d) that \(\vartheta (u\diamond w)\leq \vartheta (v\diamond (u\diamond w))+ \vartheta (v)\leq \vartheta (u\diamond (v\diamond w))+ \vartheta (v)\) for all \( u, v, w\in A\). Therefore \(\vartheta \) is a pseudo-valuation on \(A\).

Corollary 11. Let \(\vartheta \) be a real-valued function on a JU-algebra \(A\) satisfying the following conditions:
(a) \(\vartheta (1)=0,\)
(b) \(\vartheta (u\diamond v)\leq \vartheta (v)\), for all \( u, v\in A,\)
(c) \(\vartheta ((u\diamond (v\diamond w))\diamond w)\leq \vartheta (u) + \vartheta (v)\) for all \( u, v, w\in A\),
(d) \(\vartheta (v\diamond (u\diamond w))\leq \vartheta (u\diamond (v\diamond w)).\)
Then \(\vartheta \) is a pseudo-valuation on \(A\).

Theorem 12. If \(\vartheta \) is a pseudo-valuation on a JU-algebra \(A\), then \(\vartheta (v)\leq \vartheta (u\diamond v) + \vartheta (u)\), for all \( u, v\in A\).

Proof. Let \(m = (u\diamond v)\diamond v\) for any \(u, v\in A\), and \(n = u\diamond v\). Then \(v = 1\diamond v= (((u\diamond v)\diamond v)\diamond ((u\diamond v)\diamond v))\diamond v = (m \diamond (n \diamond v))\diamond v\). It follows from Proposition 7(2) and Proposition 9 that \(\vartheta (v) = \vartheta ((m \diamond (n\diamond v))\diamond v)\leq \vartheta (m)+ \vartheta (n) = \vartheta ((u\diamond v)\diamond v)+ \vartheta (u\diamond v) \leq \vartheta (u) + \vartheta (u\diamond v)\). This completes the proof.

Theorem 13. Let \(\vartheta \) be a real-valued function on a JU-algebra \(A\) satisfying the following conditions.
(1) \(\vartheta (1)=0\),
(2) \(\vartheta (v)\leq \vartheta (u\diamond v)+ \vartheta (u)\),
(3) \(\vartheta (v\diamond (u\diamond w))\leq \vartheta (u\diamond (v\diamond w))\) for all \( u, v, w \in A.\)
Then \(\vartheta \) is a pseudo-valuation on \(A\).

Proof. For any \(u,v,a,b\in A,\) using Lemma 4 together with the given conditions (2) and (3), we get \(\vartheta (u\diamond v)\leq \vartheta (v\diamond (u\diamond v))+\vartheta (v) \leq \vartheta (u\diamond (v\diamond v))+\vartheta (v)=\vartheta (u\diamond 1)+\vartheta (v)=\vartheta (1)+\vartheta (v)=\vartheta(v).\) Also, \begin{eqnarray*}\vartheta [(b\diamond (a\diamond u))\diamond u] &\leq& \vartheta [a\diamond ((b\diamond (a\diamond u))\diamond u)]+ \vartheta (a)\\ &\leq& \vartheta [(b\diamond (a\diamond u))\diamond (a\diamond u)]+ \vartheta (a)\\ &\leq& \vartheta [b\diamond[(b\diamond (a\diamond u))\diamond (a\diamond u)]]+\vartheta(a)+\vartheta (b)\\ &\leq&\vartheta [(b\diamond (a\diamond u))\diamond(b\diamond (a\diamond u))] + \vartheta (a)+\vartheta (b)\\ &=&\vartheta (1)+ \vartheta (a)+\vartheta (b)\\ &=&\vartheta (a) + \vartheta (b). \end{eqnarray*} By Corollary 11, we get that \(\vartheta \) is a pseudo-valuation on \(A\).

Proposition 14. If \(\vartheta \) is a pseudo-valuation on a JU-algebra \(A\), then

\begin{equation}\label{eq2} a\leq b\diamond u\Rightarrow \vartheta (u)\leq \vartheta (a) + \vartheta (b) \; \hbox{ for all } a, b, u\in A. \end{equation}
(5)

Proof. Suppose that \(a, b, u\in A\) such that \(a\leq b\diamond u\). Then by Proposition 7 (2) and Theorem 12, we have
\(\vartheta (u)\leq \vartheta ((a\diamond (b\diamond u))\diamond u)+ \vartheta (a\diamond (b\diamond u)) = \vartheta ((a \diamond (b\diamond u))\diamond u) + \vartheta (1) = \vartheta ((a\diamond (b\diamond u))\diamond u)\\ \leq \vartheta (a) + \vartheta (b).\)

Proposition 15. Suppose that \(A\) is a JU-algebra. Then every pseudo-valuation \(\vartheta \) on \(A\) satisfies the following inequality: \(\vartheta (u\diamond w)\leq \vartheta (u\diamond v) + \vartheta (v\diamond w)\), for all \( u, v, w\in A.\)

Proof. It follows from \(JU_1\) and Proposition 14.

Theorem 16. If \(\vartheta \) is a pseudo-valuation on a JU-algebra \(A\), then the set \(I:=\{u\in A|\; \vartheta (u) = 0\}\) is an ideal of \(A\).

Proof. We have \(\vartheta (1) = 0\) and hence \(1\in I\). Next, let \(u, v, w\in A\) be such that \(v\in I\) and \(u\diamond (v\diamond w)\in I\). Then \(\vartheta (v) = 0\) and \(\vartheta (u\diamond (v\diamond w))=0\). By Definition 6(2), we get \(\vartheta (u\diamond w)\leq \vartheta (u\diamond (v\diamond w)) + \vartheta (v) = 0\), so that \(\vartheta (u\diamond w) = 0\). Hence \(u\diamond w\in I\); therefore \(I\) is an ideal of \(A\).

Example 2. [16] Let \(A=\{1,2,3,4,5\}\) be a set in which \(\diamond \) is defined by the following table:

\(\diamond \) \(1\) \(2\) \(3\) \(4\) \(5\)
\(1\) \(1\) \(2\) \(3\) \(4\) \(5\)
\(2\) \(1\) \(1\) \(3\) \(4\) \(5\)
\(3\) \(1\) \(2\) \(1\) \(4\) \(4\)
\(4\) \(1\) \(1\) \(3\) \(1\) \(3\)
\(5\) \(1\) \(1\) \(1\) \(1\) \(1\)
It is easy to see that \(A\) is a JU-algebra. Now, define a real-valued function \(\vartheta \) on \(A\) by \(\vartheta (1)=\vartheta (2)=\vartheta (3)=0\), \(\vartheta (4)=3\), and \(\vartheta (5)=1.\) Then \(I:=\{u\in A \mid \vartheta (u) = 0\}=\{1,2, 3\}\) is an ideal of \(A,\) but \(\vartheta \) is not a pseudo-valuation, since \(\vartheta (3\diamond 5)\nleq \vartheta (3\diamond (5\diamond 5)) + \vartheta (5).\)

For a real-valued function \(\vartheta \) on a JU-algebra \(A\), define a mapping \(d_{\vartheta }: A\times A\rightarrow \mathbb{R} \) by \(d_{\vartheta} (u, v) = \vartheta (u\diamond v) + \vartheta (v\diamond u)\) for all \( (u, v)\in A\times A\). We have the following result.

Theorem 17. Let \(A\) be a JU-algebra. If a real-valued function \(\vartheta \) on \(A\) is a pseudo-valuation on \(A\), then \(d_{\vartheta }\) is a pseudo-metric on \(A\), and so \((A, d_{\vartheta })\) is a pseudo-metric space. (\(d_{\vartheta }\) is called the pseudo-metric induced by the pseudo-valuation \(\vartheta \).)

Proof. Clearly, \(d_{\vartheta }(u, v)\geq 0\), \(d_{\vartheta }(u, u)=0\) and \(d_{\vartheta }(u, v) = d_{\vartheta }(v, u)\) for all \( u, v\in A\). For any \(u, v, w\in A\), from Proposition 15 we get \(d_{\vartheta }(u, v)+ d_{\vartheta }(v, w) =[\vartheta (u\diamond v)+ \vartheta (v\diamond u)]+[\vartheta (v\diamond w)+ \vartheta (w\diamond v)] = [\vartheta (u\diamond v) + \vartheta (v\diamond w)]+[\vartheta (w\diamond v)+ \vartheta (v\diamond u)]\geq \vartheta (u\diamond w) + \vartheta (w\diamond u) = d_{\vartheta }(u, w)\). Hence \((A, d_{\vartheta })\) is a pseudo-metric space.
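As an illustration (ours, not from the paper), the induced pseudo-metric can be computed explicitly for Example 1 and the pseudo-metric axioms checked by brute force:

import itertools

A = [1, 2, 3, 4]
T = {(1, 1): 1, (1, 2): 2, (1, 3): 3, (1, 4): 4,
     (2, 1): 1, (2, 2): 1, (2, 3): 1, (2, 4): 4,
     (3, 1): 1, (3, 2): 2, (3, 3): 1, (3, 4): 4,
     (4, 1): 1, (4, 2): 2, (4, 3): 1, (4, 4): 1}
theta = {1: 0, 2: 1, 3: 1, 4: 3}                 # pseudo-valuation of Example 1

# induced pseudo-metric: d_theta(u, v) = theta(u*v) + theta(v*u)
d_theta = lambda u, v: theta[T[(u, v)]] + theta[T[(v, u)]]

ok_zero = all(d_theta(u, u) == 0 and d_theta(u, v) >= 0 for u in A for v in A)
ok_symm = all(d_theta(u, v) == d_theta(v, u) for u in A for v in A)
ok_tri  = all(d_theta(u, w) <= d_theta(u, v) + d_theta(v, w)
              for u, v, w in itertools.product(A, repeat=3))
print(ok_zero, ok_symm, ok_tri)                  # expected: True True True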

Proposition 18. Let \(A\) be a JU-algebra. Then every pseudo-metric \(d_{\vartheta }\) induced by a pseudo-valuation \(\vartheta \) satisfies the following inequalities:
(1) \(d_{\vartheta }(u, v) \geq d_{\vartheta } (x\diamond u, x\diamond v)\),
(2) \(d_{\vartheta }(u \diamond v, x\diamond y) \leq d_{\vartheta }(u\diamond v, x\diamond v) + d_{\vartheta }(x\diamond v, x\diamond y)\)
for all \( u, v, x, y\in A\).

Proof. (1) Let \(u, v, x\in A\). By \((JU_1)\), \(u\diamond v\leq (x\diamond v)\diamond (x\diamond u)\) and \(v\diamond u\leq (x\diamond u)\diamond (x\diamond v)\). It follows from Proposition 7(1) that \(\vartheta (u\diamond v)\geq \vartheta ((x\diamond v)\diamond (x\diamond u))\) and \(\vartheta (v\diamond u)\geq \vartheta ((x\diamond u)\diamond (x\diamond v))\). So \(d_{\vartheta }(u, v) = \vartheta (u\diamond v)+ \vartheta (v\diamond u)\geq \vartheta ((x\diamond v) \diamond (x\diamond u))+ \vartheta ((x\diamond u)\diamond (x\diamond v)) = d_{\vartheta }(x\diamond u, x\diamond v).\)
(2) This follows from the triangle inequality for the pseudo-metric \(d_{\vartheta }\).

Theorem 19. Let \(\vartheta \) be a real-valued function on a JU-algebra \(A\). If \(d_{\vartheta }\) is a pseudo-metric on \(A\), then \((A\times A, d_{\vartheta }^\diamond)\) is a pseudo-metric space, where $$d^\diamond _{\vartheta }((u, v), (a, b)) = \max\{d_{\vartheta } (u, a), d_{\vartheta }(v, b)\} \hbox{ for all } (u, v), (a, b) \in A\times A.$$

Proof. Suppose \(d_{\vartheta }\) is a pseudo-metric on \(A\). For any \((u, v), (a, b)\in A\times A\), we have \(d^\diamond _{\vartheta }((u, v), (u, v))\) = \(\max\{d_{\vartheta }(u, u), d_{\vartheta }(v, v)\} = 0\) and $$d^\diamond _{\vartheta }((u, v), (a, b)) = \max \{d_{\vartheta }(u, a), d_{\vartheta }(v, b)\} = \max \{d_{\vartheta }(a, u), d_{\vartheta }(b, v)\} = d^\diamond _{\vartheta }((a, b), (u, v)).$$ Now let \((u, v), (c, d), (a, b)\in A\times A\). Then we have \begin{eqnarray*}d^\diamond _{\vartheta }((u, v), (c, d))+ d^\diamond _{\vartheta }((c, d), (a, b)) &=& \max \{d_{\vartheta }(u, c), d_{\vartheta }(v, d)\} + \max \{d_{\vartheta }(c, a), d_{\vartheta }(d, b)\}\\ &\geq& \max\{d_{\vartheta }(u, c)+ d_{\vartheta }(c, a), d_{\vartheta }(v, d) + d_{\vartheta }(d, b)\}\\&\geq& \max\{d_{\vartheta }(u, a), d_{\vartheta }(v, b)\} = d^\diamond _{\vartheta }((u, v), (a, b)).\end{eqnarray*} Hence \((A\times A, d^\diamond _{\vartheta })\) is a pseudo-metric space.

Corollary 20. If \(\vartheta : A\to \mathbb{R}\) is a pseudo-valuation on a JU-algebra \(A\), then \((A\times A, d^\diamond _{\vartheta })\) is a pseudo-metric space.

Theorem 21. Let \(A\) be a JU-algebra. If \(\vartheta : A\to \mathbb{R}\) is a valuation on \(A\), then \((A, d_{\vartheta })\) is a metric space.

Proof. Suppose \(\vartheta \) is a valuation on \(A\). Then \((A, d_{\vartheta })\) is a pseudo-metric space by Theorem 17. Further, let \(u, v\in A\) be such that \(d_{\vartheta }(u, v) = 0\). Then \(0 = d_{\vartheta }(u, v) = \vartheta (u\diamond v)+ \vartheta (v\diamond u)\), and hence \(\vartheta (u\diamond v) = 0\) and \(\vartheta (v\diamond u) = 0\), since \(\vartheta (u)\geq 0\) for all \( u\in A\). Since \(\vartheta \) is a valuation on \(A\), it follows that \(u\diamond v = 1\) and \(v\diamond u = 1\), so by \((JU_3)\) we get \(u = v\). Hence \((A, d_{\vartheta })\) is a metric space.

Theorem 22. Let \(A\) be a JU-algebra. If \(\vartheta : A\to \mathbb{R}\) is a valuation on \(A\), then \((A\times A, d^\diamond _{\vartheta })\) is a metric space.

Proof. From Corollary 20, we know that \((A\times A, d^\diamond _{\vartheta })\) is a pseudo-metric space. Suppose \((u, v), (a, b)\in A\times A\) are such that \(d^\diamond _{\vartheta }((u, v), (a, b)) = 0\). Then \(0 = d^\diamond _{\vartheta }((u, v), (a, b)) = \max \{d_{\vartheta }(u, a), d_{\vartheta }(v, b)\}\), and so \(d_{\vartheta }(u, a) = 0 = d_{\vartheta }(v, b)\), since \(d_{\vartheta }(u, v) \geq 0\) for all \( (u, v)\in A\times A\). Hence \(0 = d_{\vartheta }(u, a) = \vartheta (u\diamond a) + \vartheta (a\diamond u)\) and \(0 = d_{\vartheta }(v, b) = \vartheta (v\diamond b) + \vartheta (b\diamond v)\). It follows that \(\vartheta (u\diamond a) = 0 = \vartheta (a\diamond u)\) and \(\vartheta (v\diamond b) = 0 = \vartheta (b\diamond v)\), so that \(u\diamond a = 1 = a \diamond u\) and \(v\diamond b = 1 = b\diamond v\). By \((JU_3)\) we get \(a = u\) and \(b = v\), and so \((u, v) = (a, b)\); therefore \((A\times A, d^\diamond _\vartheta )\) is a metric space.

Theorem 23. Let \(A\) be a JU-algebra. If \(\vartheta \) is a valuation on \(A\), then the operation \(\diamond \) in \(A\) is uniformly continuous.

Proof. Let \(\epsilon > 0\). If \(d^\diamond _{\vartheta }((u, v), (a, b)) < \frac {\epsilon} {2}\), then \(d_{\vartheta }(u, a) < \frac {\epsilon} {2}\) and \(d_{\vartheta }(v, b) < \frac {\epsilon} {2}.\) This implies that \(d_{\vartheta }(u\diamond v, a\diamond b) \leq d_{\vartheta }(u\diamond v, a\diamond v)+ d_{\vartheta }(a\diamond v, a\diamond b)\leq d_{\vartheta }(u, a)+ d_{\vartheta }(v, b) < \frac {\epsilon} {2}+\frac {\epsilon} {2}=\epsilon \) (by Proposition 18). Therefore the operation \(\diamond :A\times A\to A\) is uniformly continuous.
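The key inequality behind this proof, \(d_{\vartheta }(u\diamond v, a\diamond b)\leq d_{\vartheta }(u, a)+ d_{\vartheta }(v, b)\), can again be verified exhaustively on Example 1; the short sketch below is our illustration only.

import itertools

A = [1, 2, 3, 4]
T = {(1, 1): 1, (1, 2): 2, (1, 3): 3, (1, 4): 4,
     (2, 1): 1, (2, 2): 1, (2, 3): 1, (2, 4): 4,
     (3, 1): 1, (3, 2): 2, (3, 3): 1, (3, 4): 4,
     (4, 1): 1, (4, 2): 2, (4, 3): 1, (4, 4): 1}
theta = {1: 0, 2: 1, 3: 1, 4: 3}
d_theta = lambda u, v: theta[T[(u, v)]] + theta[T[(v, u)]]

# uniform continuity of the operation: d_theta(u*v, a*b) <= d_theta(u, a) + d_theta(v, b)
ok = all(d_theta(T[(u, v)], T[(a, b)]) <= d_theta(u, a) + d_theta(v, b)
         for u, v, a, b in itertools.product(A, repeat=4))
print(ok)   # expected: True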

Author Contributions

All authors contributed equally to the writing of this paper. All authors read and approved the final manuscript.

Competing Interests

The author(s) do not have any competing interests in the manuscript.

References

  1. Buşneag, C. (2007). Valuations on residuated lattices. Annals of the University of Craiova-Mathematics and Computer Science Series, 34, 21-28. [Google Scholor]
  2. Buşneag, D. (2003). On extensions of pseudo-valuations on Hilbert algebras. Discrete Mathematics, 263(1-3), 11-24. [Google Scholor]
  3. Buşneag D. (1996). Hilbert algebras with valuations. Mathematica Japonica, 44(2), 285-289. [Google Scholor]
  4. Doh, M. I., & Kang, M. S. (2010). BCK/BCI-algebras with pseudo-valuations. Honam Mathematical Journal, 32(2), 217-226. [Google Scholor]
  5. Ghorbani, S. (2010). Quotient BCI-algebras induced by pseudo-valuations. Iranian Journal of Mathematical Sciences and Informatics, 5(2), 13-24.[Google Scholor]
  6. Zhan, J., & Jun, Y. B. (2013). (Implicative) Pseudo-Valuations On \(R_{0}\)-Algebras. University Politehnica Of Bucharest Scientific Bulletin-Series A-Applied Mathematics And Physics, 75(4), 101-112.[Google Scholor]
  7. Yang, Y., & Xin, X. (2017). EQ-algebras with pseudo pre-valuations. Italian Journal of Pure and Applied Maths, 36, 29-48. [Google Scholor]
  8. Mehrshad, S., & Kouhestani, N. (2018). On Pseudo-Valuations on BCK-Algebras. Filomat, 32(12), 4319-4332. [Google Scholor]
  9. Jun, Y. B., Ahn, S. S., & Roh, E. H. (2012). BCC-algebras with pseudo-valuations. Filomat, 26(2), 243-252.[Google Scholor]
  10. Iampan, A. (2017). A new branch of the logical algebra: UP-algebras. Journal of Algebra and Related Topics, 5(1), 35-54. [Google Scholor]
  11. Yaqoob, N., Mostafa, S. M., & Ansari, M. A. (2013). On cubic KU-ideals of KU-algebras. ISRN Algebra, 2013. [Google Scholor]
  12. Ansari, M. A., & Koam, A. N. (2018). Rough approximations in KU-algebras. Italian Journal of Pure and Applied Mathematics, 40, 679-691. [Google Scholor]
  13. Ansari, M. A., Koam, A. N., & Haider, A. (2019). Rough set theory applied to UP-algebras. Italian Journal of Pure and Applied Mathematics, 42. 388-402. [Google Scholor]
  14. Ansari, M., Haidar, A., & Koam, A. (2018). On a Graph Associated to UP-Algebras. Mathematical and Computational Applications, 23(4), 61. [Google Scholor]
  15. Romano, D. A. (2019). Pseudo-Valuations on UP-Algebras. Universal Journal of Mathematics and Applications, 2(3), 138-140. [Google Scholor]
  16. Ansari, M. A., Haider, A., & Koam, A. N. (2020). On JU-algebras and p-Closure Ideals. International Journal of Mathematics and Computer Science, 15(1), 135-154. [Google Scholor]
  17. Kawila, K., Udomsetchai, C., & Iampan, A. (2018). Bipolar fuzzy UP-algebras. Mathematical and Computational Applications, 23(4), 69. [Google Scholor]
OMS-Vol. 3 (2019), Issue 1, pp. 433-439

Open Journal of Mathematical Sciences

On the vector Fourier multipliers for compact groups

Abudulaï Issa, Yaogan Mensah\(^1\)
Department of Mathematics, University of Lomé, POBox 1515, Lomé, Togo.; (A.I & Y.M)
International Chair in Mathematical Physics and Applications (ICMPA)-Unesco Chair, University of Abomey-Calavi, Benin.; (Y.M)
\(^{1}\)Corresponding Author: mensahyaogan2@gmail.com

Abstract

This paper studies some properties of Fourier multiplier operators on a compact group when the underlying multiplication functions (the symbols), defined on the dual object, take values in a Banach algebra. More precisely, boundedness properties of such Fourier multiplier operators are investigated for the space of strongly Bochner integrable functions and for the (vector) \(p\)-Fourier spaces.

Keywords:

Compact group, Fourier transform, Fourier multiplier, \(p\)-Fourier space.

1. Introduction

The theory of Fourier multipliers is part of the theory of Fourier integral operators and localization operators. Roughly speaking, a Fourier multiplier is an operator defined through multiplication by a symbol on a function's frequency spectrum. It is a way to reshape the frequencies involved in the function. Therefore this theory has many applications, for instance in signal processing, where a Fourier multiplier is called a filter. Research on Fourier multipliers is very active and quite flourishing; for recent articles in this field we refer to [1, 2, 3].

In [4], Atto et al. investigated the Fourier multipliers for a kind of \(p\)-Fourier spaces introduced in [5]. They obtained important results related to the boundedness of such operators. However, the underlying multiplication function (the symbol) took values in the set of complex numbers, even though the authors dealt with the Fourier transform of vector-valued functions. It would be interesting to consider vector-valued symbols. So, in order to harmonize things, it seems necessary to complete and extend the study to the case where the symbols are vector-valued functions. This is the main purpose of this paper. Thanks to the Fourier inversion formula in [6], it is possible to introduce what we call a vector Fourier multiplier.

The paper is organized as follows. In Section 2, we set some preliminaries related to the Fourier transform of vector valued functions. In Section 3, we investigate properties of the Fourier multipliers for Bochner integrable functions and in Section 4, we study the Fourier multipliers for \(p\)-Fourier spaces.

2. Preliminaries

Details on group representations can be found in [7, 8]. Let \( G \) be a compact group with normalized Haar measure \(dx\). We denote by \(\widehat{G}\) the unitary dual of \( G\), that is, the set of equivalence classes of unitary irreducible representations of \(G\). In each class \( \sigma \in \widehat{G} \), we choose an element, still denoted \(\sigma\), with representation space \(H_\sigma\) whose dimension is denoted by \(d_\sigma\). We designate by \( (\xi^{\sigma}_{1},\xi^{\sigma}_{2},\ldots,\xi^{\sigma}_{d_\sigma}) \) an orthonormal basis of \( H_{\sigma}\). Let \(\mathfrak{A}\) be a complex Banach algebra. The Fourier transform \(\widehat{f}\) of a strongly Bochner integrable function \(f\in L^1(G,\mathfrak{A})\) is given by the formula
\begin{eqnarray} \widehat{f}(\sigma)(\xi\otimes\eta)= \int_{G}\langle \sigma (x^{-1})\xi,\eta\rangle f(x)dx \end{eqnarray}
(1)
where \( \langle \cdot, \cdot\rangle\) denotes the inner product in \(H_\sigma\). Here \(\widehat{f}(\sigma)\) is a bounded linear operator from \(H_\sigma\otimes\overline{H}_\sigma\) into \(\mathfrak{A}\) where \(\overline{H}_\sigma\) is the conjugate Hilbert space of \(H_\sigma\); see [9]. Now set
\begin{eqnarray} u^{\sigma}_{ij}(x)=\langle \sigma (x)\xi^{\sigma}_j,\xi^{\sigma}_i \rangle \end{eqnarray}
(2)
The functions \(u^{\sigma}_{ij}\) satisfy the Schur orthogonality relations
\begin{equation} \displaystyle\int_{G}u_{ij}^{\sigma}(x)\overline{u_{mn}^{\sigma'}(x)}dx=\frac{1}{d_{\sigma}}\delta_{im}\delta_{jn}\delta_{\sigma\sigma'} \end{equation}
(3)
where \( \delta \) is the Kronecker delta symbol. The Fourier inversion formula is
\begin{eqnarray} f=\sum\limits_{\sigma \in \widehat{G}}d_\sigma\sum\limits_{i=1}^ {d_\sigma}\sum\limits_{j=1}^ {d_\sigma}\widehat{f}(\sigma)(\xi_{j}^{\sigma}\otimes\xi_{i}^{\sigma})u_{ij}^{\sigma}. \end{eqnarray}
(4)
Using this inversion formula, we can define what are vector Fourier multipliers and tackle the study of their properties. This is done in the next two sections.

3. Vector Fourier multipliers for \(L^{1}(G,\mathfrak{A})\)

Set
\begin{eqnarray} \mathcal{L}(\widehat{G},\mathfrak{A})=\prod_{\sigma \in\widehat{G}} \mathcal{L}(H_{\sigma} \otimes \overline{H}_{\sigma},\mathfrak{A}) \end{eqnarray}
(5)
where \(\mathcal{L}(H_{\sigma} \otimes \overline{H}_{\sigma},\mathfrak{A})\) designates the set of linear operators from \(H_{\sigma} \otimes \overline{H}_{\sigma}\) into \(\mathfrak{A}\). Following [6], we consider the product \(\times\) on \(\mathcal{L}(\widehat{G},\mathfrak{A}) \) defined as follows. If \( \phi_1, \phi_2 \in \mathcal{L}(\widehat{G},\mathfrak{A}) \) then \( \phi_1 \times \phi_2 \) is given by
\begin{eqnarray} (\phi_1 \times \phi_2)(\sigma)(\xi_{j}^{\sigma}\otimes\xi_{i}^{\sigma})= \sum_{k=1}^{d_\sigma}\phi_1(\sigma)(\xi_{k}^{\sigma}\otimes\xi_{i}^{\sigma})\phi_2(\sigma)(\xi_{j}^{\sigma}\otimes\xi_{k}^{\sigma}). \end{eqnarray}
(6)
Let \(\varphi: \widehat{G} \rightarrow \mathfrak{A}\) be a bounded function. We define the vector Fourier multiplier \(T_\varphi\) by the formula
\begin{eqnarray} T_{\varphi}f=\sum_{\sigma\in\widehat{G}}d_{\sigma} \sum_{i=1}^{d_{\sigma}} \sum_{j=1}^{d_{\sigma}} \varphi(\sigma) \widehat{f}(\sigma)(\xi_{j}^{\sigma}\otimes\xi_{i}^{\sigma})u_{ij}^{\sigma}. \end{eqnarray}
(7)
What we consider here generalizes the case treated in [4] which corresponds to the particular case \(\mathfrak{A}=\mathbb{C}\). We denote by \( \mathcal{M}(L^{1}(G,\mathfrak{A})) \) the set of all vector Fourier multipliers on \( L^{1}(G,\mathfrak{A})\). We introduce the following operation which we may need.
\begin{eqnarray} (\varphi \boxtimes \widehat{f})(\sigma)(\xi\otimes\eta)= \varphi(\sigma) \widehat{f}(\sigma)(\xi \otimes\eta), \xi \in H_{\sigma}, \, \eta \in \ \overline{ H}_{\sigma}. \end{eqnarray}
(8)
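To make formulas (1), (7) and (8) concrete, here is a small numerical sketch (ours, with illustrative names, not part of the paper) for the simplest compact group one can compute with: the cyclic group \(\mathbb{Z}_N\) with normalized counting measure as Haar measure. Every irreducible representation is a one-dimensional character \(\sigma_k(x)=e^{2\pi i kx/N}\), so \(d_\sigma=1\) and the sums above collapse to ordinary discrete Fourier series; we take \(\mathfrak{A}=M_2(\mathbb{C})\) with the matrix product.

import numpy as np

N = 8                                    # G = Z_N; its dual is {0, ..., N-1}, all d_sigma = 1
rng = np.random.default_rng(0)
f   = rng.standard_normal((N, 2, 2)) + 1j * rng.standard_normal((N, 2, 2))   # f: G -> A
phi = rng.standard_normal((N, 2, 2)) + 1j * rng.standard_normal((N, 2, 2))   # symbol: dual(G) -> A

def fourier(g):
    # formula (1): g_hat(k) = (1/N) * sum_x e^{-2 pi i k x / N} g(x)  (normalized Haar measure)
    x = np.arange(N)
    return np.array([(np.exp(-2j * np.pi * k * x / N)[:, None, None] * g).mean(axis=0)
                     for k in range(N)])

def T_phi(phi, g):
    # formula (7): (T_phi g)(x) = sum_k phi(k) g_hat(k) e^{2 pi i k x / N}, product taken in A
    ghat = fourier(g)
    return np.array([sum(phi[k] @ ghat[k] * np.exp(2j * np.pi * k * xx / N) for k in range(N))
                     for xx in range(N)])

# the Fourier coefficients of T_phi f are phi(k) f_hat(k), i.e. phi boxtimes f_hat as in (8)
lhs = fourier(T_phi(phi, f))
rhs = np.array([phi[k] @ fourier(f)[k] for k in range(N)])
print(np.allclose(lhs, rhs))             # expected: True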
Taking inspiration from [6, Theorem 3.1], we state the following theorem.

Theorem 1. If \(f,g \in L^1 (G, \mathfrak{A})\) then \( \widehat{f \ast g}= \widehat{f} \times \widehat{g}\).

The next theorem gives a characterization of the vector Fourier multipliers on \( L^{1}(G,\mathfrak{A})\).

Theorem 2.

\begin{eqnarray} T_{\varphi} \in \mathcal{M}(L^{1}(G,\mathfrak{A})) \Longleftrightarrow\widehat{T_{\varphi}f}=\varphi \boxtimes\widehat{f}, \forall f \in L^{1}(G,\mathfrak{A}) . \end{eqnarray}
(9)

Proof. Let \( T_\varphi \in \mathcal{M}(L^{1}(G,\mathfrak{A}))\). Let \( \sigma' \in \widehat{G}\). Vectors \(\xi \in H_{\sigma'}\) and \(\eta \in \overline{H}_{\sigma'}\) can be written in the forms \(\xi= \sum\limits_{n=1}^{d_{\sigma'}}\alpha_{n}\xi_{n}^{\sigma'}\) and \( \eta=\sum\limits_{m=1}^{d_{\sigma'}} \overline{\beta}_m\xi_{m}^{\sigma'} \) in the basis \( (\xi_{1}^{\sigma'}, \xi_{2}^{\sigma'},\cdots,\xi_{d_{{\sigma'}}}^{\sigma'} )\) of \(H_{\sigma'}\). Then \begin{align*} \widehat{T_{\varphi}f}(\sigma')(\xi\otimes \eta) &=\widehat{T_{\varphi}f}(\sigma')(\sum_{n=1}^{d_{\sigma'}}\alpha_{n}\xi_{n}^{\sigma'} \otimes\sum_{m=1}^{d_{\sigma'}} \overline{\beta}_m\xi_{m}^{\sigma'} )\\ &=\sum_{n=1}^{d_{\sigma'}}\sum_{m=1}^{d_{\sigma'}}\alpha_{n}\overline{\beta}_m\widehat{T_{\varphi}f}(\sigma')(\xi_{n}^{\sigma'}\otimes\xi_{m}^{\sigma'})\\ &=\sum_{n=1}^{d_{\sigma'}}\sum_{m=1}^{d_{\sigma'}}\alpha_{n}\overline{\beta}_m\displaystyle\int_{G}\langle\sigma'(x^{-1})\xi_{n}^{\sigma'},\xi_{m}^{\sigma'}\rangle(T_\varphi f)(x)dx\\ &=\sum_{n=1}^{d_{\sigma'}}\sum_{m=1}^{d_{\sigma'}}\alpha_{n}\overline{\beta}_m\displaystyle\int_{G}\overline{u_{nm}^{\sigma'}(x)}(T_{\varphi}f)(x)dx\\ &=\sum_{n=1}^{d_{\sigma'}}\sum_{m=1}^{d_{\sigma'}}\alpha_{n}\overline{\beta}_m\displaystyle\int_{G}\overline{u_{nm}^{\sigma'}(x)}\sum\limits_{\sigma\in\widehat{G}}d_\sigma \sum\limits_{i=1}^{d_{\sigma}} \sum\limits_{j=1}^{d_{\sigma}} \varphi(\sigma) \widehat{f}(\sigma)(\xi_{j}^{\sigma}\otimes\xi_{i}^{\sigma})u_{ij}^{\sigma}(x)dx \\ &=\sum_{n=1}^{d_{\sigma'}}\sum_{m=1}^{d_{\sigma'}}\alpha_{n}\overline{\beta}_m \sum\limits_{\sigma\in \widehat{G}}d_\sigma\sum_{i=1}^{d_\sigma}\sum_{j=1}^{d_\sigma} \varphi(\sigma)\widehat{f}(\sigma)(\xi_{j}^{\sigma}\otimes\xi_{i}^{\sigma})\displaystyle\int_{G}\overline{u_{mn}^{\sigma'}(x)}u_{ij}^{\sigma}(x)dx. \end{align*} By appealing to the Schur orthogonality relations, we get \begin{eqnarray*} \widehat{T_{\varphi}f}(\sigma')(\xi\otimes\eta)&=& \sum\limits_{n=1}^{d_{\sigma'}}\sum\limits_{m=1}^{d_{\sigma'}}\alpha_{n}\overline{\beta}_m d_{\sigma'} \sum\limits_{i=1}^{d_{\sigma}}\sum\limits_{j=1}^{d_{\sigma}} \varphi(\sigma')\widehat{f} (\sigma')(\xi_{j}^{\sigma'}\otimes\xi_{i}^{\sigma'})(\frac{1}{d_{\sigma'}}\delta_{im} \delta_{jn})\\ &=& \sum\limits_{n=1}^{d_{\sigma'}}\sum\limits_{m=1}^{d_{\sigma'}}\alpha_{n}\overline{\beta}_m \varphi(\sigma') \widehat{f} (\sigma')(\xi_{n}^{\sigma'}\otimes\xi_{m}^{\sigma'})\\ &=&\varphi(\sigma') \widehat{f}(\sigma')(\sum_{n=1}^{d_{\sigma'}}\alpha_{n}\xi_{n}^{\sigma'}\otimes\sum_{m=1}^{d_{\sigma'}}\overline{\beta_{m}}\xi_{m}^{\sigma'})\\ &=&\varphi(\sigma') \widehat{f}(\sigma')(\xi\otimes\eta)\\ &=&(\varphi\boxtimes \widehat{f})(\sigma')(\xi\otimes\eta) \end{eqnarray*} Thus \( \widehat{T_{\varphi}f}=\varphi\boxtimes \widehat{f}. \) Conversely, let us assume that \(\forall f \in L^{1}(G,\mathfrak{A}), \widehat{T_{\varphi}f}=\varphi \boxtimes \widehat{f}\). 
Then, using the inversion formula we obtain \begin{align*} T_{\varphi}f&=\sum\limits_{\sigma\in \widehat{G}}d_\sigma\sum\limits_{i=1}^{d_\sigma}\sum\limits_{j=1}^{d_\sigma}\widehat{T_{\varphi}f}(\sigma)(\xi_{j}^{\sigma}\otimes\xi_{i}^{\sigma})u_{ij}^{\sigma}\\ &=\sum\limits_{\sigma \in \widehat{G}}d_\sigma\sum_{i=1}^{d_\sigma}\sum_{j=1}^{d_\sigma}(\varphi\boxtimes\widehat{f})(\sigma)(\xi_{j}^{\sigma}\otimes\xi_{i}^{\sigma})u_{ij}^{\sigma} \\ &=\sum\limits_{\sigma \in \widehat{G}}d_\sigma\sum_{i=1}^{d_\sigma}\sum_{j=1}^{d_\sigma}\varphi(\sigma) \widehat{f}(\sigma)(\xi_{j}^{\sigma}\otimes\xi_{i}^{\sigma})u_{ij}^{\sigma}. \end{align*} Thus \( T_{\varphi}f \) is a vector Fourier multiplier for \( L^{1}(G,\mathfrak{A})\).

Theorem 3. If \(T_{\varphi}, T_{\phi} \in \mathcal{M}(L^{1}(G,\mathfrak{A}))\) and \(f,g\in L^{1}(G,\mathfrak{A}) \) then the following equalities hold:

\begin{equation} T_{\varphi}(f \ast g)=T_{\varphi}f \ast g. \end{equation}
(10)
\begin{equation} T_{\varphi \phi}(f \ast g)=(T_{\varphi}T_{\phi}f) \ast g. \end{equation}
(11)

Proof. Let \( T_{\varphi} \in \mathcal{M}(L^{1}(G,\mathfrak{A}))\) and \(f,g\in L^{1}(G,\mathfrak{A})\). \begin{align*} \mathcal{F}({T_{\varphi}(f \ast g)})(\sigma)(\xi_{i}^{\sigma}\otimes\xi_{j}^{\sigma}) &=(\varphi \boxtimes\widehat{f \ast g})(\sigma)(\xi_{i}^{\sigma}\otimes\xi_{j}^{\sigma})\\ &=\varphi(\sigma) \widehat{f \ast g}(\sigma)(\xi_{i}^{\sigma}\otimes\xi_{j}^{\sigma})\\ &= \varphi(\sigma) [(\widehat{f}\times \widehat{g})(\sigma)(\xi_{i}^{\sigma}\otimes\xi_{j}^{\sigma})]\\ &= \varphi(\sigma) [\sum_{k=1}^{d_{\sigma}}\widehat{f}(\sigma)(\xi_{k}^{\sigma}\otimes\xi_{j}^{\sigma}) \widehat{g}(\sigma)(\xi_{i}^{\sigma}\otimes\xi_{k}^{\sigma})]\\ &=\sum_{k=1}^{d_{\sigma}}\varphi(\sigma) \widehat{f}(\sigma)(\xi_{k}^{\sigma}\otimes\xi_{j}^{\sigma}) \widehat{g}(\sigma)(\xi_{i}^{\sigma}\otimes\xi_{k}^{\sigma}) \\ &=\sum_{k=1}^{d_{\sigma}}[\varphi(\sigma)\widehat{f}(\sigma)(\xi_{k}^{\sigma}\otimes\xi_{j}^{\sigma})] \widehat{g}(\sigma)(\xi_{i}^{\sigma}\otimes\xi_{k}^{\sigma})\\ &=\sum_{k=1}^{d_{\sigma}}[(\varphi \boxtimes\widehat{f})(\sigma)(\xi_{k}^{\sigma}\otimes\xi_{j}^{\sigma})] \widehat{g}(\sigma)(\xi_{i}^{\sigma}\otimes\xi_{k}^{\sigma}) \\ &=\sum_{k=1}^{d_{\sigma}}[\widehat{T_{\varphi}f})(\sigma)(\xi_{k}^{\sigma}\otimes\xi_{j}^{\sigma})] \widehat{g}(\sigma)(\xi_{i}^{\sigma}\otimes\xi_{k}^{\sigma})\\ &=(\widehat{T_{\varphi}f}\times \widehat{g})(\sigma)(\xi_{i}^{\sigma}\otimes\xi_{j}^{\sigma})\\ &=\mathcal{F}({T_{\varphi}f \ast g})(\sigma)(\xi_{i}^{\sigma}\otimes\xi_{j}^{\sigma}) \end{align*} Since the Fourier transformation is injective, we have \(T_{\varphi}(f \ast g)=T_{\varphi}f \ast g.\) Let \( T_{\varphi}, T_{\phi} \in \mathcal{M}(L^{1}(G,\mathfrak{A}))\) and \(f,g\in L^{1}(G,\mathfrak{A})\). \begin{eqnarray*} \mathcal{F}({T_{\varphi \phi}(f \ast g)})(\sigma)(\xi_{i}^{\sigma}\otimes\xi_{j}^{\sigma}) &=&(\varphi \phi\boxtimes \widehat{ f \ast g})(\sigma)(\xi_{i}^{\sigma}\otimes\xi_{j}^{\sigma})\\ &=&(\varphi \phi)(\sigma) (\widehat{ f \ast g})(\sigma)(\xi_{i}^{\sigma}\otimes\xi_{j}^{\sigma})\\ &=&\varphi(\sigma) \phi(\sigma) (\widehat{ f} \times \widehat{ g})(\sigma)(\xi_{i}^{\sigma}\otimes\xi_{j}^{\sigma})\\ &=&\varphi(\sigma) \phi(\sigma) \sum_{k=1}^{d_{\sigma}}\widehat{f}(\sigma)(\xi_{k}^{\sigma}\otimes\xi_{j}^{\sigma})\widehat{ g}(\sigma)(\xi_{i}^{\sigma}\otimes\xi_{k}^{\sigma})\\ &=&\varphi(\sigma) \sum_{k=1}^{d_{\sigma}}\phi(\sigma)\widehat{f}(\sigma)(\xi_{k}^{\sigma}\otimes\xi_{j}^{\sigma})\widehat{ g}(\sigma)(\xi_{i}^{\sigma}\otimes\xi_{k}^{\sigma})\\ &=&\varphi(\sigma) \sum_{k=1}^{d_{\sigma}}(\phi \boxtimes\widehat{f})(\sigma)(\xi_{k}^{\sigma}\otimes\xi_{j}^{\sigma})\widehat{ g}(\sigma)(\xi_{i}^{\sigma}\otimes\xi_{k}^{\sigma})\\ &=&\varphi(\sigma) \sum_{k=1}^{d_{\sigma}}\widehat{T_{\phi}f}(\sigma)(\xi_{k}^{\sigma}\otimes\xi_{j}^{\sigma})\widehat{ g}(\sigma)(\xi_{i}^{\sigma}\otimes\xi_{k}^{\sigma})\\ &=&\sum_{k=1}^{d_{\sigma}}\varphi(\sigma)\widehat{T_{\phi}f}(\sigma)(\xi_{k}^{\sigma}\otimes\xi_{j}^{\sigma})\widehat{ g}(\sigma)(\xi_{i}^{\sigma}\otimes\xi_{k}^{\sigma})\\ &=&\sum_{k=1}^{d_{\sigma}}(\varphi\boxtimes\widehat{T_{\phi}f})(\sigma)(\xi_{k}^{\sigma}\otimes\xi_{j}^{\sigma})\widehat{ g}(\sigma)(\xi_{i}^{\sigma}\otimes\xi_{k}^{\sigma})\\ &=&\sum_{k=1}^{d_{\sigma}}\widehat{T_{\varphi}T_{\phi}f}(\sigma)(\xi_{k}^{\sigma}\otimes\xi_{j}^{\sigma})\widehat{ g}(\sigma)(\xi_{i}^{\sigma}\otimes\xi_{k}^{\sigma})\\ &=&(\widehat{T_{\varphi}T_{\phi}f}\times\widehat{g})(\sigma)(\xi_{i}^{\sigma}\otimes\xi_{j}^{\sigma})\\ &=&\mathcal{F}((T_\varphi T_\phi f) \ast g) 
(\sigma)(\xi_{i}^{\sigma}\otimes\xi_{j}^{\sigma}). \end{eqnarray*} Therefore, by the injectivity of the Fourier transformation, we have \begin{eqnarray*} T_{\varphi \phi}(f \ast g) = ( T_{\varphi}T_{\phi}f) \ast g. \end{eqnarray*}

For \( \psi \in \mathcal{L}(\widehat{G},\mathfrak{A})\), we set
\begin{eqnarray} \|\psi\|_\infty = \sup \lbrace\|\psi(\sigma)\|\,:\, \sigma\in \widehat{G} \rbrace \end{eqnarray}
(12)
where
\begin{eqnarray} \|\psi(\sigma)\| = \sup \lbrace \|\psi(\sigma)(\xi\otimes\eta)\| \,:\, \|\xi\|\leq 1, \|\eta\|\leq 1\rbrace. \end{eqnarray}
(13)
We also consider the set
\begin{eqnarray} \mathcal{L}_{\infty}(\widehat{G},\mathfrak{A})=\lbrace\psi \in \mathcal{L}(\widehat{G},\mathfrak{A})\,:\,\|\psi\|_\infty< \infty \rbrace. \end{eqnarray}
(14)
We can now state the following theorem.

Theorem 4. If \( T_\varphi \in \mathcal{M}(L^{1}(G,\mathfrak{A}))\) and \(f\in L^{1}(G,\mathfrak{A})\) then \(\widehat{T_{\varphi}f}\in \mathcal{L}_{\infty}(\widehat{G},\mathfrak{A})\) and \begin{eqnarray*} \lVert \widehat{T_{\varphi}f}\rVert_{\infty} \leq\lVert\varphi\rVert_{\infty} \lVert f\rVert_1. \end{eqnarray*}

Proof. Let \(\xi\otimes\eta \in H_{\sigma} \otimes \overline{H}_\sigma\). \begin{eqnarray*} \lVert \widehat{T_{\varphi}f}(\sigma)(\xi\otimes\eta)\rVert &=&\lVert (\varphi \boxtimes \widehat{f})(\sigma)(\xi\otimes\eta)\rVert \\ &=&\lVert \varphi(\sigma) \widehat{f}(\sigma)(\xi\otimes\eta)\rVert\\ &\leq & \lVert \varphi(\sigma) \rVert \lVert \widehat{f}(\sigma)(\xi\otimes\eta)\rVert\\ &\leq & \lVert\varphi\rVert_{\infty} \lVert \xi \rVert \lVert \eta \rVert \lVert f \rVert_1. \end{eqnarray*} Hence \begin{eqnarray*} \lVert \widehat{T_{\varphi}f}(\sigma)\rVert \leq \lVert \varphi\rVert_{\infty} \lVert f \rVert_1,\, \forall \sigma \in \widehat{G}. \end{eqnarray*} Thus \begin{eqnarray*} \lVert \widehat{T_{\varphi}f}\rVert_{\infty} \leq \lVert \varphi\rVert_{\infty} \lVert f \rVert_1. \end{eqnarray*}
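In the toy setting of \(\mathbb{Z}_N\) and \(\mathfrak{A}=M_2(\mathbb{C})\) used in the sketch above, the bound of Theorem 4 can be observed numerically (again an illustration of ours; here \(\|\cdot\|\) is the operator norm on \(M_2(\mathbb{C})\)):

import numpy as np

N, rng = 8, np.random.default_rng(1)
f   = rng.standard_normal((N, 2, 2)) + 1j * rng.standard_normal((N, 2, 2))
phi = rng.standard_normal((N, 2, 2)) + 1j * rng.standard_normal((N, 2, 2))

x = np.arange(N)
fhat = np.array([(np.exp(-2j * np.pi * k * x / N)[:, None, None] * f).mean(axis=0)
                 for k in range(N)])                           # Fourier coefficients in A

opnorm = lambda a: np.linalg.norm(a, 2)                        # spectral norm on M_2(C)
norm_f1    = np.mean([opnorm(f[i]) for i in range(N)])         # ||f||_1 (normalized Haar measure)
norm_phi   = max(opnorm(phi[k]) for k in range(N))             # ||phi||_infinity
norm_Tfhat = max(opnorm(phi[k] @ fhat[k]) for k in range(N))   # ||hat(T_phi f)||_infinity
print(norm_Tfhat <= norm_phi * norm_f1 + 1e-12)                # expected: True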

4. Vector Fourier multipliers on \(p\)-Fourier spaces

For \( 1 \leq p < \infty\), consider
\begin{eqnarray} \mathcal{L}_{p}(\widehat{G},\mathfrak{A})=\lbrace\psi \in \mathcal{L}(\widehat{G},\mathfrak{A}) \,:\, \sum\limits_{\sigma\in \widehat{G}}d_\sigma\sum\limits_{i=1}^{d_\sigma}\sum\limits_{j=1}^{d_\sigma}\|\psi(\sigma)(\xi_i^\sigma\otimes\xi_j^\sigma)\|^p< \infty \rbrace. \end{eqnarray}
(15)
The space \(\mathcal{L}_{p}(\widehat{G},\mathfrak{A})\) is a Banach space if it is endowed with the norm
\begin{eqnarray} \|\psi\|_{\mathcal{L}_p}= \left(\sum\limits_{\sigma\in \widehat{G}}d_\sigma\sum\limits_{i=1}^{d_\sigma}\sum\limits_{j=1}^{d_\sigma}\|\psi(\sigma)(\xi_i^\sigma\otimes\xi_j^\sigma)\|^p\right)^{\frac{1}{p}}. \end{eqnarray}
(16)
Call \(p\)-Fourier spaces the spaces
\begin{eqnarray} \mathcal{A}_{p}(G,\mathfrak{A})=\lbrace f\in L^{1}(G,\mathfrak{A})\,:\, \widehat{f} \in \mathcal{L}_{p}(\widehat{G},\mathfrak{A})\rbrace, \, 1\leq p < \infty. \end{eqnarray}
(17)
On the space \( \mathcal{A}_{p}(G,\mathfrak{A}) \) the following two norms are defined:
\begin{eqnarray} \|f\|_{\mathcal{A}_{p}}=\|f\|_{1} + \| \widehat{f} \|_{\mathcal{L}_{p}},\quad f \in \mathcal{A}_{p}(G,\mathfrak{A}) \end{eqnarray}
(18)
\begin{eqnarray} \|f\|^{\mathcal{A}_{p}}= \| \widehat{f} \|_{\mathcal{L}_{p}},\quad f \in \mathcal{A}_{p}(G,\mathfrak{A}) \end{eqnarray}
(19)
where \( \|.\|_{1} \) and \( \|.\|_{\mathcal{L}_{p}} \) denote the norm in \( L^{1}(G,\mathfrak{A}) \) and the norm in \(\mathcal{L}_{p}(\widehat{G} ,\mathfrak{A}) \) respectively. Equipped with each of the two norms \( \|.\|_{\mathcal{A}_{p}} \) and \( \|.\|^{\mathcal{A}_{p}} \), the space \( \mathcal{A}_{p}(G,\mathfrak{A})\) has been proved to be a Banach space [5]. The following theorem expresses the invariance of the space \(\mathcal{A}_{p}(G,\mathfrak{A})\) under the Fourier multiplier \(T_{\varphi}\).

Theorem 5. If \(f \in \mathcal{A}_{p}(G,\mathfrak{A})\), then \(T_{\varphi}f \in \mathcal{A}_{p}(G,\mathfrak{A})\).

Proof. Let us assume that \( f \in \mathcal{A}_{p}(G,\mathfrak{A})\). Then \begin{eqnarray*} \lVert \widehat{ T_{\varphi}f }\rVert_{\mathcal{L}_p}^{p} &=&\sum\limits_{\sigma \in \widehat{G}}d_{\sigma}\sum\limits_{i=1}^{d_\sigma}\sum\limits_{j=1}^{d_\sigma}\lVert\widehat{T_{\varphi}f}(\sigma)(\xi_{i}^{\sigma}\otimes\xi_{j}^{\sigma}) \rVert^{p}\\ &=&\sum\limits_{\sigma \in \widehat{G}}d_{\sigma}\sum\limits_{i=1}^{d_\sigma}\sum\limits_{j=1}^{d_\sigma}\lVert (\varphi \boxtimes \widehat{f})(\sigma)(\xi_{i}^{\sigma}\otimes \xi_{j}^{\sigma}) \rVert^{p}\\ &=&\sum\limits_{\sigma \in \widehat{G}}d_{\sigma}\sum\limits_{i=1}^{d_\sigma}\sum\limits_{j=1}^{d_\sigma}\lVert \varphi(\sigma) \widehat{f}(\sigma)(\xi_{i}^{\sigma}\otimes \xi_{j}^{\sigma}) \rVert^{p}\\ &\leq &\sum\limits_{\sigma \in \widehat{G}}d_{\sigma}\sum\limits_{i=1}^{d_\sigma}\sum\limits_{j=1}^{d_\sigma}\lVert \varphi(\sigma) \rVert^{p}\lVert \widehat{f}(\sigma)(\xi_{i}^{\sigma}\otimes \xi_{j}^{\sigma}) \rVert^{p}. \end{eqnarray*} Since \( \varphi \) is bounded, we obtain \begin{eqnarray*} \lVert \widehat{ T_{\varphi}f }\rVert_{\mathcal{L}_{p}}^{p} & \leq & \|\varphi\|_\infty^{p}\sum\limits_{\sigma \in \widehat{G}}d_{\sigma}\sum\limits_{i=1}^{d_\sigma}\sum\limits_{j=1}^{d_\sigma}\lVert \widehat{f}(\sigma)(\xi_{i}^{\sigma}\otimes \xi_{j}^{\sigma}) \rVert^{p}.\\ & \leq & \|\varphi\|_\infty^{p} \lVert \widehat{f} \rVert_{\mathcal{L}_{p}}^{p}< \infty. \end{eqnarray*} Thus \( T_{\varphi}f \in \mathcal{A}_{p}(G,\mathfrak{A})\).

Remark 1. From the inclusion property in Theorem 5, one can extend the operator \(T_\varphi\) to the topological dual \(\mathcal{A}_p^*(G, \mathfrak{A})\) of \( (\mathcal{A}_p(G, \mathfrak{A}), \|\cdot\|_{\mathcal{A}_p})\) or \((\mathcal{A}_p(G, \mathfrak{A}), \|\cdot\|^{\mathcal{A}_p})\), exactly as the Fourier transform is extended from the Schwartz space to the space of tempered distributions, by the relation

\begin{equation} \langle T_\varphi X^*, f \rangle =\langle X^*, T_\varphi f \rangle, \, X^*\in \mathcal{A}_p^*(G, \mathfrak{A}), \, f\in \mathcal{A}_p(G, \mathfrak{A}). \end{equation}
(20)

Corollary 6. \(T_\varphi\) is a bounded operator on \(\mathcal{A}_p(G, \mathfrak{A})\) when the latter is endowed with the norm \(\|\cdot\|^{\mathcal{A}_p}\).

Proof. In the proof of Theorem 5 we have established that \( \lVert \widehat{ T_{\varphi}f }\rVert_{\mathcal{L}_{p}} \leq \|\varphi\|_\infty\lVert\widehat{f}\rVert_{\mathcal{L}_{p}}\). But \( \lVert \widehat{ T_{\varphi}f }\rVert_{\mathcal{L}_{p}}=\lVert T_{\varphi}f \rVert^{{\mathcal{A}_{p}}} \) and \( \lVert \widehat{f} \rVert_{\mathcal{L}_{p}}=\lVert f \rVert^{{\mathcal{A}_{p}}} \). Therefore \begin{eqnarray*} \lVert T_{\varphi}f \rVert^{{\mathcal{A}_{p}}} \leq \|\varphi\|_\infty \lVert f \rVert^{\mathcal{A}_{p}}. \end{eqnarray*} Thus \( T_{\varphi} \) is bounded on \( \mathcal{A}_{p}(G,\mathfrak{A}) \) endowed with the norm \( \lVert.\rVert^{\mathcal{A}_{p}}.\)

Theorem 7. If \( T_{\varphi} \) is a bounded operator on \( L^{1}(G,\mathfrak{A}) \) then \( T_{\varphi} \) is also a bounded operator on \( \mathcal{A}_{p}(G,\mathfrak{A})\) when the latter is endowed with the norm \( \lVert.\rVert_{\mathcal{A}_{p}}\).

Proof. \begin{eqnarray*} \lVert T_{\varphi}f \rVert_{\mathcal{A}_{p}}&=& \lVert T_{\varphi}f\rVert_1+\lVert \widehat{T_{\varphi}f}\rVert_{\mathcal{L}_{p}}\\ &\leq& \|T_\varphi\|\lVert f \rVert_1+ \|\varphi\|_\infty \lVert \widehat{f}\rVert_{\mathcal{L}_p}\\ &\leq & \max\{ \|T_\varphi\|, \|\varphi\|_\infty \} (\lVert f \rVert_1 + \lVert \widehat{f}\rVert_{\mathcal{L}_p})\\ &\leq & C \lVert f \rVert_{\mathcal{A}_{p}} \end{eqnarray*} where \(C=\max\{ \|T_\varphi\|, \|\varphi\|_\infty \}\). Thus \( T_{\varphi} \) is bounded on \( \mathcal{A}_{p}(G,\mathfrak{A})\) endowed with the norm \( \lVert \cdot \rVert_{\mathcal{A}_{p}}\) .

Acknowledgments

The authors would like to express their thanks to the referee for his useful remarks.

Author Contributions

All authors contributed equally to the writing of this paper. All authors read and approved the final manuscript.

Competing Interests

The author(s) do not have any competing interests in the manuscript.

References

  1. Cleanthous, G., Georgiadis, A. G. & Nielsen, M. (2019). Fourier multipliers on anisotropic mixed-norm spaces of distributions. Mathematica Scandinavica, 124 (2), 289-304. [Google Scholor]
  2. Julio, D. & Ruzhansky, M. (2018). Fourier multipliers in Hilbert spaces, in Integral Fourier Operators, Proceedings of a Summer School, ed. Sylvie Paycha and Pierre J. Clavier, 3, 167-191, Universitätsverlag Potsdam, Potsdam, Germany. [Google Scholor]
  3. Rozendaal, J. (2018). Fourier multiplier theorems involving type and cotype. Journal of Fourier Analysis and Applications, 24(2), 583-619. [Google Scholor]
  4. Atto, E. J., Mensah, Y. & Assiamoua, V. S. K. (2014). The Fourier multipliers of \(p\)-Fourier spaces on compact groups. British Journal of Mathematics and Computer Science, 4(5), 667-673. [Google Scholor]
  5. Mensah, Y. & Assiamoua, V. S. K. (2010). The \(p\)-Fourier spaces \(\mathcal{A}_p(G, A)\) of vector valued functions on compact groups. Advances and Applications in Mathematical Science, 6(1), 59-66. [Google Scholor]
  6. Assiamoua, V. S. K. & Olubummo, A. (1989). Fourier-Stieltjes transforms of vector-valued measures on compact groups. Acta Scientiarum Mathematicarum (Szeged), 53, 301-307. [Google Scholor]
  7. Deitmar, A. & Echterhoff, S. (2009). Principles of Harmonics Analysis. Springer, New York. [Google Scholor]
  8. Gaal, S. A. (1973). Linear analysis and representation theory. Springer, Berlin. [Google Scholor]
  9. Mensah, Y. (2013). Facts about the Fourier-Stieltjes transform of vector measures on compact groups. International Journal of Analysis and Applications, 2(1), 19-25. [Google Scholor]
OMS-Vol. 3 (2019), Issue 1, pp. 417-432

Open Journal of Mathematical Sciences

Analysis of the dynamics of avian influenza A(H7N9) epidemic model with re-infection

Abayomi Samuel OKE\(^1\), Oluwafemi Isaac BADA
Department of Mathematical Sciences, Adekunle Ajasin University, P.M.B. 001, Akungba Akoko, Ondo State, Nigeria.; (A.S.O)
Department of Mathematical Sciences, University of Benin, P.M.B. 1154, Benin City, Nigeria.; (O.I.B)
\(^{1}\)Corresponding Author: okeabayomisamuel@gmail.com, abayomi.oke@aaua.edu.ng

Abstract

Since the emergence of avian influenza A(H7N9) in China in 2013, several studies have been carried out to investigate its spread. In this paper, a mathematical model describing the transmission dynamics of avian influenza A(H7N9) between humans and poultry proposed by Li et al. [1] is modified by introducing re-infections into the susceptible human compartment. The next generation matrix method is used to calculate the reproduction number. We also establish the local and global stability of the equilibria using Lyapunov functions. Finally, we use numerical simulations to validate our results.

Keywords:

Avian influenza A(H7N9), reproduction number, Lyapunov functions, next generation matrix, re-infections.
OMS-Vol. 3 (2019), Issue 1, pp. 404-416

Open Journal of Mathematical Sciences

On chromatic polynomial of certain families of dendrimer graphs

Aqsa Shah, Syed Ahtsham Ul Haq Bokhary\(^1\)
Centre of Advance Studies in Pure and Applied Mathematics, Bahauddin Zakariya University, Multan, Pakistan.; (A.S & S.A.U.H.B)
\(^{1}\)Corresponding Author: sihtsham@gmail.com

Abstract

Let \(G\) be a simple graph with vertex set \(V(G)\) and edge set \(E(G)\). A mapping \(g:V (G)\rightarrow\{1,2,\ldots,t\}\) is called a \(t\)-coloring if for every edge \(e = (u, v)\) we have \(g(u) \neq g(v)\). The chromatic number of the graph \(G\) is the minimum number of colors that are required to properly color the graph. The chromatic polynomial of the graph \(G\), denoted by \(P(G, t)\), is the number of all possible proper colorings of \(G\). Dendrimers are hyper-branched macromolecules with a rigorously tailored architecture. They can be synthesized in a controlled manner either by a divergent or a convergent procedure. Dendrimers have gained a wide range of applications in supra-molecular chemistry, particularly in host-guest reactions and self-assembly processes. Their applications in chemistry, biology and nano-science are unlimited. In this paper, the chromatic polynomials for certain families of dendrimer nanostars have been computed.

Keywords:

t-coloring, chromatic polynomials, dendrimer nanostars.

1. Introduction

A simple graph \(G=(V, E)\) consists of a finite nonempty set \(V(G)\) of objects called vertices together with a set \(E(G)\) of unordered pairs of distinct vertices of \(G\) called edges. A \(t\)-coloring of a graph \(G\) is a function \(g : V (G)\rightarrow\{1,2,\ldots,t\}\) which satisfies \(g(u)\neq g(v)\) for any edge \(e = (u, v)\). In 1912, Birkhoff [1] introduced the concept of the chromatic polynomial in an attempt to solve the four color problem. A graph \(G\) is said to be \(t\)-colorable if such a \(t\)-coloring exists. The chromatic number \(\chi (G)\) is the minimal \(t\) for which the graph is \(t\)-colorable, and we say that \(G\) is \(t\)-chromatic if \(\chi (G) = t\).

The chromatic polynomial of a graph \(G\), denoted by \(P(G,t)\), is the number of distinct \(t\)-colorings of \(G\). If two graphs \(G\) and \(H\) have the same chromatic polynomial, then they are called chromatically equivalent graphs. In recent years, graph theory has attracted considerable research attention in cheminformatics, physics, computer science and the social sciences, and a lot of research in these fields has been carried out using graph-theoretic concepts. In molecular graphs, the vertices of the graph represent the atoms of the molecule, and the edges represent the chemical bonds.

Dendrimers are hyper-branched macromolecules, with a rigorously tailored architecture. They can be synthesized, in a controlled manner, either by a divergent or a convergent procedure. Dendrimers have gained a wide range of applications in supramolecular chemistry, particularly in host-guest reactions and self-assembly processes. Their applications in chemistry, biology and nanoscience are unlimited. Recently, Alikhani and Iranmanesh [2] investigated the mathematical properties of such nanostructures and computed some of their chromatic polynomials. In this paper, we investigate the chromatic polynomials of certain dendrimer nanostars.

2. Known Results

In this section, we present some known results about chromatic polynomials that are used to compute the chromatic polynomials of dendrimer graphs.

Theorem 1. [3] Fundamental Reduction Theorem. $$P(G,t)=P(G-e,t)-P(G/e,t).$$

Suppose that \(G\) is a simple graph with an edge \(e\). Then \(G-e\) is the graph obtained from \(G\) by deleting the edge \(e\), and \(G/e\) is the graph obtained from \(G\) by contracting the edge \(e\) to one vertex. Let \(P_{m+1}\) be a path with vertices \(y_{0},y_{1},y_{2},...,y_{m}\) and let \(G\) be any graph. The graph \(G_{v_{0}}(m)=G(m)\) is the graph obtained from \(G\) by identifying a vertex \(v_{0}\) of \(G\) with an end vertex \(y_{0}\) of \(P_{m+1}\), see Figure 1. For example, if \(G\) is a path \(P_{2}\), then \(G(m)=P_{2}(m)\) is the path \(P_{m+2}\).

Figure 1. The graphs \(G(m)\) and \(G_{1}(m)G_{2}\)

The chromatic polynomial of the graph \(G(m)\) [2] is:
\begin{equation}\label{a} {P}(G(m),t)=(t-1)^m{P}(G,t), \end{equation}
(1)
and the chromatic polynomial of the graph \(G_{1}(m)G_{2}\) [2] is:
\begin{equation}\label{b} {P}(G_{1}(m)G_{2},t)=\frac{(t-1)^{m+1}{P}(G_{1},t)P(G_{2},t)}{t}. \end{equation}
(2)

Theorem 2. [4] If the graphs \(G\) and \(H\) have only one common vertex \(v\), that is, \(V(G)\cap V(H) = \{v\}\), then $$P(G \cup H,t) =\frac{P(G, t)P(H, t)}{t}.$$
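
The gluing formulas above are easy to verify symbolically. The following minimal Python sketch (using the sympy library, together with the standard facts \(P(P_n,t)=t(t-1)^{n-1}\) and \(P(C_n,t)=(t-1)^n+(-1)^n(t-1)\)) checks Theorem 2 on two triangles sharing a vertex and Equations (1) and (2) on paths; the reading of \(G_1(m)G_2\) as two graphs joined by a path with \(m\) internal vertices is our interpretation of Figure 1.

import sympy as sp

t = sp.symbols('t')

def P_path(n):
    # chromatic polynomial of the path on n vertices
    return t * (t - 1)**(n - 1)

def P_cycle(n):
    # chromatic polynomial of the cycle C_n
    return (t - 1)**n + (-1)**n * (t - 1)

# Theorem 2: two triangles sharing exactly one vertex (the "bowtie" graph);
# its chromatic polynomial is t(t-1)^2(t-2)^2.
bowtie = sp.expand(P_cycle(3) * P_cycle(3) / t)
assert sp.expand(bowtie - t*(t - 1)**2*(t - 2)**2) == 0

# Equation (1): attaching a pendant path with m new vertices to G multiplies
# P(G,t) by (t-1)^m; for G = P_2 the result is the path P_{m+2}.
m = 3
assert sp.expand((t - 1)**m * P_path(2) - P_path(m + 2)) == 0

# Equation (2), reading G_1(m)G_2 as G_1 and G_2 joined by a path with m
# internal vertices (our assumption): for G_1 = G_2 = P_2 this is the path P_{m+4}.
assert sp.expand((t - 1)**(m + 1) * P_path(2) * P_path(2) / t - P_path(m + 4)) == 0
print("Theorem 2 and Equations (1)-(2) verified on small examples")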

Lemma 3. [5] Let \(G\) be a graph of order \(n\) and size \(m\) and \({P}(G,t)\) be the chromatic polynomial of \(G\), then

  • a) \(deg({P}(G,t))=n\),
  • b) the coefficient of \(t^n\) is \(1\),
  • c) the coefficient of \(t^{n-1}\) is \(-m\).
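
For the computations in the next section, it is convenient to record the factorizations of the cycle polynomials that are used repeatedly; both follow from the standard formula \(P(C_n,t)=(t-1)^n+(-1)^n(t-1)\):
$$P(C_5,t)=t(t-1)(t^3-4t^2+6t-4), \qquad P(C_6,t)=t(t-1)(t^4-5t^3+10t^2-10t+5).$$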

3. Main Results

3.1. Chromatic polynomial of Polyaryl Ether dendrimer

The polyaryl ether dendrimer is an important class of commercial polymers. The \(32\) carboxylate groups on the dendrimer surface make it highly soluble in basic aqueous solution. Its three-dimensional structure is built from a central unit, or core unit, denoted by \(G_{1}(0)\) (see Figure 2), its branches, also known as added branches, denoted by the graph \(H\) (see Figure 3), and the end groups, grown over \(n\) stages. The graph contains \(2^{n+2}\) hexagons at stage \(n\). The graph of the polyaryl ether dendrimer is denoted by \(G_{1}(n)\) and shown in Figure 4.

Figure 2. The graph of the polyaryl ether dendrimer nanostar \(G_{1}(0)\)

Figure 3. The graph of the added branch of the polyaryl ether dendrimer nanostar, denoted by \(H\).

Figure 4. The graph of the polyaryl ether dendrimer \(G_{1}(2)\)

The following two theorems present the formulas for the chromatic polynomials of \(G_{1}(0)\) and \(G_{1}(n)\).

Theorem 4. The chromatic polynomial of the polyaryl ether dendrimer \(G_{1}(0)\) is $${P}(G_{1}(0),t)=t(t-1)^{23}(t^4-5t^3+10t^2-10t+5)^4.$$

Proof. By applying Theorem 2 and Equation (1), we get $${P}(G_{1}(0),t)=\frac{(t-1)^{19}({P}(C_{6},t))^4}{t^3}=t(t-1)^{23}(t^4-5t^3+10t^2-10t+5)^4.$$

Theorem 5. For \(n \geq 0 \), the chromatic polynomial of \(G_{1}(n)\) is $${P}(G_{1}(n),t)=t(t-1)^{28(2^n)-5}(t^4-5t^3+10t^2-10t+5)^{2^{n+2}}.$$

Proof. The proof is by induction on \(n\). The result is true for \(n=0\) by Theorem 4. Suppose that the result holds for all values less than \(n\); we prove it for \(n\). The chromatic polynomial of the core of the polyaryl ether dendrimer graph is $${P}(G_{1}(0),t)=t(t-1)^{23}(t^4-5t^3+10t^2-10t+5)^4.$$ The chromatic polynomial of the graph \(H\) is computed by applying Theorem 2 and Equation (1): $${P}(H,t)=(t-1)^6{P}(C_{6},t)=t(t-1)^7(t^4-5t^3+10t^2-10t+5).$$ For \(n \geq 1\), the graph \(G_{1}(n)\) is obtained from \(G_{1}(n-1)\) by adding \(4(2^{n-1})\) copies of the graph \(H\) such that each copy of \(H\) has one vertex in common with the graph \(G_{1}(n-1)\). Therefore, by Theorem 2 we get $${P}(G_{1}(m),t)=\frac{P(G_{1}(m-1),t)\times({P}(H,t))^{4(2^{m-1})}}{t^{4(2^{m-1})}},\,\,\,\,\,\,\,\,1\leq m\leq n.$$ Now, by backward substitution, we get \begin{eqnarray*}{P}(G_{1}(n),t)&=&\frac{{P}(G_{1}(0),t)\times({P}(H,t))^{4(2^{n}-1)}}{t^{4(2^{n}-1)}}\\ &=&\frac{t(t-1)^{23}(t^4-5t^3+10t^2-10t+5)^4\left(t(t-1)^7(t^4-5t^3+10t^2-10t+5)\right)^{4(2^n-1)}}{t^{4(2^n-1)}}.\end{eqnarray*} Hence we conclude that $${P}(G_{1}(n),t)=t(t-1)^{28(2^n)-5}(t^4-5t^3+10t^2-10t+5)^{2^{n+2}}.$$
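
Since the closed form in Theorem 5 is obtained by backward substitution, it can be double-checked against the recurrence used in the proof. The sketch below (sympy again; the helper names P_G1 and P_H are ours) confirms that the stated formula satisfies \(P(G_1(m),t)=P(G_1(m-1),t)\,(P(H,t))^{4(2^{m-1})}/t^{4(2^{m-1})}\) for the first few values of \(m\).

import sympy as sp

t = sp.symbols('t')
q = t**4 - 5*t**3 + 10*t**2 - 10*t + 5   # so that P(C_6,t) = t(t-1)q

def P_G1(n):
    # closed form claimed in Theorem 5
    return t * (t - 1)**(28 * 2**n - 5) * q**(2**(n + 2))

P_H = t * (t - 1)**7 * q                 # chromatic polynomial of the added branch H

for m in (1, 2, 3):
    copies = 4 * 2**(m - 1)
    rhs = P_G1(m - 1) * P_H**copies / t**copies
    # both sides are products of the same factors, so the ratio collapses to 1
    assert sp.simplify(P_G1(m) / rhs) == 1
print("Theorem 5 is consistent with the recurrence for m = 1, 2, 3")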

The following result presents the formula for order and size of the Polyaryl ether dendrimer nanostar \(G_{1}(n)\).

Corollary 6. Let \(G_{1}(n)\) be the polyaryl ether dendrimer nanostar. Then

  • a) \(|V(G_{1}(n))|=44(2^n)-4,\)
  • b) \(|E(G_{1}(n))|=48(2^n)-5.\)

Proof. a) By Lemma 3(a), \(deg({P}(G,t))=|V(G)|\), that is, the degree of the chromatic polynomial equals the number of vertices of the graph. Since \(deg({P}(G_{1}(n),t))=44(2^n)-4\) by Theorem 5, we get $$|V(G_{1}(n))|=44(2^n)-4.$$ b) By Lemma 3(c), the coefficient of \(t^{|V(G)|-1}\) equals \(-|E(G)|\). So Theorem 5 implies that $$|E(G_{1}(n))|=48(2^n)-5.$$
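
As an illustration of how Lemma 3 is used here, the following sketch expands the polynomial of Theorem 5 for \(n=1\) and reads the order and size of \(G_1(1)\) off its two leading coefficients (sympy; the expected printed values are \(84=44(2)-4\) and \(91=48(2)-5\)).

import sympy as sp

t = sp.symbols('t')
q = t**4 - 5*t**3 + 10*t**2 - 10*t + 5

# Theorem 5 with n = 1, fully expanded
P = sp.Poly(sp.expand(t * (t - 1)**51 * q**8), t)

n_vertices = P.degree()                              # Lemma 3(a)
n_edges = -P.coeff_monomial(t**(P.degree() - 1))     # Lemma 3(c)
print(n_vertices, n_edges)                           # expected: 84 91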

3.2. Chromatic polynomial of Organosilicon dendrimer

Organosilicon dendrimers were first studied and prepared by Nakayama and Lin. The organosilicon dendrimer graph, denoted by \(G[n]\), consists of three major parts: the core unit, denoted by \(G[1]\) (Figure 5), the added branches, denoted by the graph \(H\), and the end groups, grown over \(n\) stages. Let \(H_{1}\) be the graph obtained by vertex gluing of \(C_{5}\) and \(K_{2}.\) The graph \(H\) is obtained by vertex gluing of \(3\) copies of \(H_{1}\) and the path \(P_{2}.\) The graph contains \(6(3^{n-1})-2\) pentagons at stage \(n\). The graphs \(H\) and \(G[2]\) are shown in Figures 6 and 7.

Figure 5. The graph of organosilicon dendrimer \(G[1]\)

Figure 6. The graph of added branch of organosilicon dendrimer \(H\)

Figure 7. The graph of organosilicon dendrimer \(G[2]\)

The following two theorems present the formulas for the chromatic polynomials of the organosilicon dendrimers \(G[1]\) and \(G[n]\).

Theorem 7. The chromatic polynomial of the organosilicon dendrimer \(G[1]\) is $${P}(G[1],t)=t(t-1)^8(t^3-4t^2+6t-4)^4.$$

Proof. By applying Theorem 2, we get $${P}(H_{1},t)=\frac{{P}(C_5,t){P}(K_2,t)}{t}=(t-1)((t-1)^5-(t-1))=t(t-1)^2(t^3-4t^2+6t-4).$$ The graph \(G[1]\) is composed of \(4\) copies of the graph \(H_{1}\) such that all copies of \(H_{1}\) intersect in one common vertex (the central silicon atom \(Si\) in Figure 5). Therefore, applying Theorem 2 three times, we have $${P}(G[1],t)=\frac{{P}(H_1,t){P}(H_1,t){P}(H_1,t){P}(H_1,t)}{t\cdot t\cdot t}=\frac{t^4(t-1)^8(t^3-4t^2+6t-4)^4}{t^3}.$$ This implies that $${P}(G[1],t)=t(t-1)^8(t^3-4t^2+6t-4)^4.$$

Theorem 8. For \(n \geq 1\), the chromatic polynomial of \(G[n]\) is $${P}(G[n],t)=t(t-1)^{14(3^{n-1})-6}(t^3-4t^2+6t-4)^{6(3^{n-1})-2}.$$

Proof. The proof is by induction on \(n\). The result is true for \(n=1\) by Theorem 7. Suppose that the result holds for all values less than \(n\); we prove it for \(n\). The chromatic polynomial of \(G[1]\) is $${P}(G[1],t)=t(t-1)^8(t^3-4t^2+6t-4)^4.$$ The chromatic polynomial of the added branch \(H\) is computed by applying Theorem 2 and Equation (1) as follows: $${P}(H,t)=\frac{(t-1)^4({P}(C_{5},t))^3}{t^2}=t(t-1)^7(t^3-4t^2+6t-4)^3.$$ For \(n \geq 2\), the graph \(G[n]\) is obtained from \(G[n-1]\) by adding \(4(3^{n-2})\) copies of the graph \(H\) such that each copy of \(H\) has one vertex in common with the graph \(G[n-1]\). Therefore, by Theorem 2, we get $${P}(G[m],t)=\frac{{P}(G[m-1],t)\times({P}(H,t))^{4(3^{m-2})}}{t^{4{(3^{m-2})}}},\,\,\,\,\,\,\,\,\, 2\leq m\leq n.$$ Now, by backward substitution, we get \begin{eqnarray*}{P}(G[n],t)&=&\frac{{P}(G[1],t)\times({P}(H,t))^{2(3^{n-1}-1)}}{t^{2(3^{n-1}-1)}}\\ &=&t(t-1)^8(t^3-4t^2+6t-4)^4\left((t-1)^7(t^3-4t^2+6t-4)^3\right)^{2(3^{n-1}-1)}\\ &=&t(t-1)^{14(3^{n-1})-6}(t^3-4t^2+6t-4)^{6\times3^{n-1}-2}.\end{eqnarray*}

The following result presents the formula for order and size of the Organosilicon dendrimer \(G[n]\).

Corollary 9. For the Organosilicon dendrimer \(G[n]\), we have

  • a) \(|V(G[n])|=21+32[3^{n-1}-1],\)
  • b) \(|E(G[n])|=24+38[3^{n-1}-1].\)

Proof. a) By Lemma 3(a), \(deg({P}(G,t))=|V(G)|\), that is, the degree of the chromatic polynomial equals the number of vertices of the graph. Since \(deg({P}(G[n],t))=21+32[3^{n-1}-1]\) by Theorem 8, it follows that $$|V(G[n])|=21+32[3^{n-1}-1].$$ b) By Lemma 3(c), the coefficient of \(t^{|V(G)|-1}\) equals \(-|E(G)|\). So Theorem 8 implies that $$|E(G[n])|=24+38[3^{n-1}-1].$$

3.3. Chromatic polynomial of Nanostar dendrimer

For \( n\geq 1\), the graph of the nanostar dendrimer \(NSC_{5}C_{6}\), denoted by \(G_{2}(n)\), consists of a core unit, added branches and end groups, grown over \(n\) stages. The first stage consists of a hexagon with two pentagons, and for \( n\geq 2\) the \(4(2^{n-2})\) new branches emitted from the previous stage are added; their stepwise growth follows the structure of the nanostar dendrimer \(NSC_{5}C_{6}\). The graph \(G_{2}(2)\) and the graph of the added branch, denoted by \(H\), are shown in Figures 8 and 9, respectively. Now, we determine the chromatic polynomial of the family of dendrimer nanostars \(NSC_{5}C_{6}\) denoted by \(G_{2}(n).\)

Figure 8. The graph \(G_{2}(2)\) of dendrimer nanostar \(NSC_{5}C_{6}\)

Figure 9. The graph \(H\) of added branch of dendrimer nanostar \(NSC_{5}C_{6}\)

The following two theorems present the formulas for the chromatic polynomials of \(G_{2}(1)\) and \(G_{2}(n)\).

Theorem 10. The chromatic polynomial of Nanostar dendrimer \(NSC_{5}C_{6}\) denoted by \(G_{2}(1)\) is: $${P}(G_{2} (1),t)=t(t-1)^{17}(t^3-4t^2+6t-4)^2(t^4-5t^3+10t^2-10t+5).$$

Proof. By applying Theorem 2 and Equation (1), we get $${P}(G_{2}(1),t)=\frac{(t-1)^{14}{P}(C_6,t)({P}(C_5,t))^2}{t^2}=t(t-1)^{17}(t^3-4t^2+6t-4)^2(t^4-5t^3+10t^2-10t+5).$$

Theorem 11. For \(n \geq 1\), the chromatic polynomial of \(G_{2}(n)\) is $${P}(G_{2}(n),t)=t(t-1)^{11(2^{n+1})-27}(t^3-4t^2+6t-4)^{2^{n+1}-2}(t^4-5t^3+10t^2-10t+5)^{2^{n+1}-3}.$$

Proof. The proof is by induction on \(n\). The result is true for \(n=1\) by Theorem 10. Suppose that the result holds for all values less than \(n\); we prove it for \(n\). The chromatic polynomial of \(G_{2}(1)\) is $${P}(G_{2}(1),t)=t(t-1)^{17}(t^3-4t^2+6t-4)^2(t^4-5t^3+10t^2-10t+5).$$ The chromatic polynomial of the added branch \(H\) is calculated by applying Theorem 2 and Equation (1): $${P}(H,t)=\frac{(t-1)^9{P}(C_{5},t){P}(C_{6},t)}{t}=t(t-1)^{11}(t^3-4t^2+6t-4)(t^4-5t^3+10t^2-10t+5).$$ For \(n \geq 2\), the graph \(G_{2}(n)\) is obtained from \(G_{2}(n-1)\) by adding \(4(2^{n-2})\) copies of the graph \(H\) such that each copy of \(H\) has one vertex in common with the graph \(G_{2}(n-1)\). Therefore, by Theorem 2, we get $${P}(G_{2}(m),t)=\frac{{P}(G_{2}(m-1),t)\times({P}(H,t))^{4(2^{m-2})}}{t^{4(2^{m-2})}},\,\,\,\,\,\,2\leq m\leq n.$$ Now, by backward substitution, we get \begin{eqnarray*}{P}(G_{2}(n),t)&=&\frac{{P}(G_{2}(1),t)\times({P}(H,t))^{4(2^{n-1}-1)}}{t^{4(2^{n-1}-1)}}\\ &=&t(t-1)^{17}(t^3-4t^2+6t-4)^2(t^4-5t^3+10t^2-10t+5)\\ &&\times\left[(t-1)^{11}(t^3-4t^2+6t-4)(t^4-5t^3+10t^2-10t+5)\right]^{4(2^{n-1}-1)}.\end{eqnarray*} Hence we conclude that $${P}(G_{2}(n),t)=t(t-1)^{11(2^{n+1})-27}(t^3-4t^2+6t-4)^{2^{n+1}-2}(t^4-5t^3+10t^2-10t+5)^{2^{n+1}-3}.$$

The following result presents the formula for order and size of the Nanostar dendrimer.

Corollary 12. Let \(G_{2}(n)\) be the Nanostar dendrimer \(NSC_{5}C_{6}\). Then we have

  • a) \(|V(G_{2}(n))|=9\times2^{n+2}-44,\)
  • b) \(|E(G_{2}(n))|=10\times2^{n+2}-50.\)

Proof. a) By Lemma 3(a), \(deg({P}(G,t))=|V(G)|\), that is, the degree of the chromatic polynomial equals the number of vertices of the graph. Since \(deg({P}(G_{2}(n),t))=9\times2^{n+2}-44\) by Theorem 11, it follows that $$|V(G_{2}(n))|=9\times2^{n+2}-44.$$ b) By Lemma 3(c), the coefficient of \(t^{|V(G)|-1}\) equals \(-|E(G)|\). So Theorem 11 implies that $$|E(G_{2}(n))|=10\times2^{n+2}-50.$$
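
The order and size stated in Corollary 12 can be read off Theorem 11 in the same way as before; the small sympy check below does this for \(n=1,2\) (the helper name P_G2 is only for illustration) and reproduces \(9\times2^{n+2}-44\) vertices and \(10\times2^{n+2}-50\) edges.

import sympy as sp

t = sp.symbols('t')
p = t**3 - 4*t**2 + 6*t - 4
q = t**4 - 5*t**3 + 10*t**2 - 10*t + 5

def P_G2(n):
    # closed form of Theorem 11
    return t * (t - 1)**(11 * 2**(n + 1) - 27) * p**(2**(n + 1) - 2) * q**(2**(n + 1) - 3)

for n in (1, 2):
    poly = sp.Poly(sp.expand(P_G2(n)), t)
    V = poly.degree()
    E = -poly.coeff_monomial(t**(V - 1))
    print(n, V, E, 9 * 2**(n + 2) - 44, 10 * 2**(n + 2) - 50)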

3.4. Chromatic polynomial of Tetrathiafulvalence dendrimer

The graph of the tetrathiafulvalence dendrimer, denoted by \(TD_{2}[n]\), consists of a core unit, added branches and end groups, grown over \(n\) stages; stage \(n\) contains \(2^{n+3}-6\) pentagons and \(2^{n+3}-4\) hexagons. After the core unit stage, for \(n\geq 1\), \(4(2^{n-1})\) branches are added at stage \(n\), and their stepwise growth follows the structure of the tetrathiafulvalence dendrimer \(TD_{2}[n]\). The core graph \(TD_{2}[0]\), the graph of the added branch \(H\), and the graph of \(TD_{2}[1]\) are shown in Figures 10, 11 and 12, respectively. Now, we determine the chromatic polynomial of the class of dendrimers known as tetrathiafulvalence dendrimers \(TD_{2}[n].\)

Figure 10. The core of tetrathiafulvalence dendrimer \(TD_{2}[0]\).

Figure 11. The added graph \(H\) in each branch of \(TD_{2}[n]\).

Figure 12. The graph of Tetrathiafulvalence dendrimer, \(TD_{2}[1]\).

The following two theorems present the formulas for the chromatic polynomials of \(TD_{2}[0]\) and \(TD_{2}[n]\).

Theorem 13. The chromatic polynomial of the tetrathiafulvalence dendrimer \(TD_{2}[0]\) is: $${P}(TD_{2}[0],t)=t(t-1)^{27}(t^3-4t^2+6t-4)^2(t^4-5t^3+10t^2-10t+5)^4.$$

Proof. By applying Theorem 2 and Equation (1), we get $${P}(TD_{2}[0],t)=\frac{(t-1)^{21}({P}(C_6,t))^4({P}(C_5,t))^2}{t^5}.$$ This implies that $${P}(TD_{2}[0],t)=t(t-1)^{27}(t^3-4t^2+6t-4)^2(t^4-5t^3+10t^2-10t+5)^4.$$

Theorem 14. For \(n \geq 0\), the chromatic polynomial of \(TD_{2}[n]\) is: $${P}(TD_{2}[n],t)=t(t-1)^{17(2^{n+2})-41}(t^3-4t^2+6t-4)^{2^{n+3}-6}(t^4-5t^3+10t^2-10t+5)^{2^{n+3}-4}.$$

Proof. The proof is by induction on \(n\). The result is true for \(n=0\) by Theorem 13. Suppose that the result holds for all values less than \(n\); we prove it for \(n\). The chromatic polynomial of the core \(TD_{2}[0]\) is $${P}(TD_{2}[0],t)=t(t-1)^{27}(t^3-4t^2+6t-4)^2(t^4-5t^3+10t^2-10t+5)^4.$$ The chromatic polynomial of the added branch \(H\) is calculated by applying Theorem 2 and Equation (1) as follows: $${P}(H,t)=\frac{(t-1)^{13}({P}(C_6,t))^2({P}(C_5,t))^2}{t^3}=t(t-1)^{17}(t^3-4t^2+6t-4)^2(t^4-5t^3+10t^2-10t+5)^2.$$ For \(n \geq 1\), the graph \(TD_{2}[n]\) is obtained from \(TD_{2}[n-1]\) by adding \(4(2^{n-1})\) copies of the graph \(H\) such that each copy of \(H\) has one vertex in common with the graph \(TD_{2}[n-1]\). Therefore, by Theorem 2, we have $${P}(TD_{2}[m],t)=\frac{{P}(TD_{2}[m-1],t)\times({P}(H,t))^{4(2^{m-1})}}{t^{4(2^{m-1})}},\,\,\,\,\,1\leq m\leq n.$$ Now, by backward substitution, we get \begin{eqnarray*}{P}(TD_{2}[n],t)&=&\frac{{P}(TD_{2}[0],t)\times({P}(H,t))^{4(2^{n}-1)}}{t^{4(2^{n}-1)}}\\ &=&t(t-1)^{27}(t^3-4t^2+6t-4)^2(t^4-5t^3+10t^2-10t+5)^4\\ &&\times\left[(t-1)^{17}(t^3-4t^2+6t-4)^2(t^4-5t^3+10t^2-10t+5)^2\right]^{4(2^{n}-1)}.\end{eqnarray*} Hence we conclude that $${P}(TD_{2}[n],t)=t(t-1)^{17(2^{n+2})-41}(t^3-4t^2+6t-4)^{2^{n+3}-6}(t^4-5t^3+10t^2-10t+5)^{2^{n+3}-4}.$$
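
The same kind of consistency check applies to Theorem 14: the sketch below (sympy; helper names ours) verifies that the stated closed form satisfies the recurrence \(P(TD_2[m],t)=P(TD_2[m-1],t)\,(P(H,t))^{4(2^{m-1})}/t^{4(2^{m-1})}\) for small \(m\).

import sympy as sp

t = sp.symbols('t')
p = t**3 - 4*t**2 + 6*t - 4
q = t**4 - 5*t**3 + 10*t**2 - 10*t + 5

def P_TD2(n):
    # closed form claimed in Theorem 14
    return t * (t - 1)**(17 * 2**(n + 2) - 41) * p**(2**(n + 3) - 6) * q**(2**(n + 3) - 4)

P_H = t * (t - 1)**17 * p**2 * q**2      # chromatic polynomial of the added branch H

for m in (1, 2, 3):
    copies = 4 * 2**(m - 1)
    rhs = P_TD2(m - 1) * P_H**copies / t**copies
    assert sp.simplify(P_TD2(m) / rhs) == 1
print("Theorem 14 is consistent with the recurrence for m = 1, 2, 3")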

The following result presents the formula for size and order of Tetrathiafulvalence dendrimer.

Corollary 15. For the Tetrathiafulvalence dendrimer \(TD_{2}[n]\), we have

  • a) \(|V(TD_{2}[n])|=50+124(2^{n}-1),\)
  • b) \(|E(TD_{2}[n])|=5(28(2^n)-17).\)

Proof. a) By Lemma 3(a), \(deg({P}(G,t))=|V(G)|\), that is, the degree of the chromatic polynomial equals the number of vertices of the graph. Since \(deg({P}(TD_{2}[n],t))=50+124(2^{n}-1)\) by Theorem 14, it follows that $$|V(TD_{2}[n])|=50+124(2^{n}-1).$$ b) By Lemma 3(c), the coefficient of \(t^{|V(G)|-1}\) equals \(-|E(G)|\). So Theorem 14 implies that $$|E(TD_{2}[n])|=5(28(2^n)-17).$$

3.5. Chromatic polynomial of Polyether nanostar dendrimer

The graph of the polyether dendrimer \(PD_{3}[n]\) consists of a core unit, added branches and end groups, grown over \(n\) stages; stage \(n\) contains \(3(2^{n+1}-1)\) hexagons. In total, \(3(2^{n}-1)\) new branches are added over the \(n\) stages, and their stepwise growth follows the structure of the polyether dendrimer nanostar \(PD_{3}[n]\). The core \(PD_{3}[0]\), the graph of the added branch \(H\), and the graph of \(PD_{3}[1]\) are shown in Figures 13, 14 and 15, respectively.

Figure 13. The core of the polyether dendrimer \(PD_{3}[0]\).

Figure 14. The graph of the added branch \(H\) of the polyether dendrimer \(PD_{3}[n]\).

Figure 15. The graph of the polyether dendrimer \(PD_{3}[1]\).

Now, we determine the chromatic polynomial of the family of polyether nanostar dendrimers \(PD_{3}[n]\). The following two theorems present the formulas for the chromatic polynomials of \(PD_{3}[0]\) and \(PD_{3}[n]\).

Theorem 16. The chromatic polynomial of the polyether dendrimer nanostar \(PD_{3}[0]\) is $${P}(PD_{3}[0],t)=t(t-1)^{10}(t^4-5t^3+10t^2-10t+5)^3.$$

Proof. By applying Theorem 2 and Equation (1), we get $${P}(PD_{3}[0],t)=\frac{(t-1)^7({P}(C_6,t))^{3}}{t^2}.$$ This implies that $${P}(PD_{3}[0],t)=t(t-1)^{10}(t^4-5t^3+10t^2-10t+5)^3.$$

Theorem 17. For \(n \geq 0\), the chromatic polynomial of \(PD_{3}[n]\) is: $${P}(PD_{3}[n],t)=t(t-1)^{33(2^n)-23}(t^4-5t^3+10t^2-10t+5)^{3(2^{n+1})-3}.$$

Proof. The proof is by induction on \(n\). The result is true for \(n=0\) by Theorem 16. Suppose that the result holds for all values less than \(n\); we prove it for \(n\). The chromatic polynomial of the core \(PD_{3}[0]\) is $${P}(PD_{3}[0],t)=t(t-1)^{10}(t^4-5t^3+10t^2-10t+5)^3.$$ The chromatic polynomial of the added branch \(H\) is calculated by applying Theorem 2 and Equation (1): $${P}(H,t)=\frac{(t-1)^{9}({P}(C_6,t))^2}{t}=t(t-1)^{11}(t^4-5t^3+10t^2-10t+5)^{2}.$$ For \(n \geq 1\), the graph \(PD_{3}[n]\) is obtained from \(PD_{3}[n-1]\) by adding \(3(2^{n-1})\) copies of the graph \(H\) such that each copy of \(H\) has one vertex in common with the graph \(PD_{3}[n-1]\). Therefore, by Theorem 2, we get $${P}(PD_{3}[m],t)=\frac{{P}(PD_{3}[m-1],t)\times({P}(H,t))^{3(2^{m-1})}}{t^{3(2^{m-1})}},\,\,\,\,\,1\leq m\leq n.$$ Now, by backward substitution, we get $${P}(PD_{3}[n],t)=t(t-1)^{10}(t^4-5t^3+10t^2-10t+5)^3\left[\frac{\left(t(t-1)^{11}(t^4-5t^3+10t^2-10t+5)^{2}\right)^{3(2^n-1)}}{t^{3(2^n-1)}}\right].$$ Hence we conclude that $${P}(PD_{3}[n],t)=t(t-1)^{33(2^n)-23}(t^4-5t^3+10t^2-10t+5)^{3(2^{n+1})-3}.$$

The following result presents the formula for the order and size of the polyether dendrimer \(PD_{3}[n]\).

Corollary 18. For the polyether dendrimer \(PD_{3}[n]\), we have

  • a) \(|V(PD_{3}[n])|=57(2^n)-34,\)
  • b) \(|E(PD_{3}[n])|=63(2^n)-38.\)

Proof. a) By Lemma 3(a), \(deg({P}(G,t))=|V(G)|\), that is, the degree of the chromatic polynomial equals the number of vertices of the graph. Since \(deg({P}(PD_{3}[n],t))=57(2^n)-34\) by Theorem 17, it follows that $$|V(PD_{3}[n])|=57(2^n)-34.$$ b) By Lemma 3(c), the coefficient of \(t^{|V(G)|-1}\) equals \(-|E(G)|\). So Theorem 17 implies that $$|E(PD_{3}[n])|=63(2^n)-38.$$

4. Conclusion

Dendrimers have a wide range of applications in supra-molecular chemistry, particularly in host-guest reactions and self-assembly processes. During the past several years, many research papers have dealt with the study of mathematical and topological properties of certain dendrimer nano-structures [6, 7, 8, 9, 10]. Recently, Alikhani and Iranmanesh investigated the mathematical properties of these nanostructures and some of their chromatic polynomials [2]. In this paper, we have extended this study and computed the chromatic polynomials of certain dendrimers.

Author Contributions

All authors contributed equally to the writing of this paper. All authors read and approved the final manuscript.

Competing Interests

The author(s) do not have any competing interests in the manuscript.

References

  1. Birkhoff, G. D. (1912). A determinant formula for the number of ways of coloring a map. The Annals of Mathematics, 14(1/4), 42-46.
  2. Alikhani, S., & Iranmanesh, M. A. (2010). Chromatic polynomials of some dendrimers. Journal of Computational and Theoretical Nanoscience, 7(11), 2314-2316.
  3. Dong, F. M., Koh, K. M., & Teo, K. L. (2005). Chromatic polynomials and chromaticity of graphs. World Scientific.
  4. Zykov, A. A. (1949). On some properties of linear complexes. Matematicheskii Sbornik, 66(2), 163-188.
  5. Farrell, E. J. (1980). On chromatic coefficients. Discrete Mathematics, 29(3), 257-264.
  6. Gutman, I., & Polansky, O. E. (2012). Mathematical concepts in organic chemistry. Springer-Verlag, New York.
  7. Hasni, R., Arif, N. E., & Alikhani, S. (2014). Eccentric connectivity polynomials of some families of dendrimers. Journal of Computational and Theoretical Nanoscience, 11(2), 450-453.
  8. Hayat, S., & Imran, M. (2014). Computation of topological indices of certain networks. Applied Mathematics and Computation, 240, 213-228.
  9. Imran, M., Hayat, S., & Shafiq, M. K. (2015). Valency based topological indices of organosilicon dendrimers and cactus chains. Optoelectronics and Advanced Materials-Rapid Communications, 9(May-June 2015), 821-830.
  10. Khalifeh, M. H., Yousefi-Azari, H., & Ashrafi, A. R. (2009). The Szeged and Wiener numbers of water-soluble polyaryl ether dendrimer nanostars. Digest Journal of Nanomaterials & Biostructures (DJNB), 4(1), 63-66.
]]>
An extension of Petrović’s inequality for \(h-\)convex (\(h-\)concave) functions in plane https://old.pisrt.org/psr-press/journals/oms-vol-3-2019/an-extension-of-petrovics-inequality-for-h-convex-h-concave-functions-in-plane/ Sat, 30 Nov 2019 17:55:14 +0000 https://old.pisrt.org/?p=3516
OMS-Vol. 3 (2019), Issue 1, pp. 398 - 403 Open Access Full-Text PDF
Wasim Iqbal, Khalid Mahmood Awan, Atiq Ur Rehman, Ghulam Farid
Abstract: In this paper, Petrović's inequality is generalized for \(h-\)convex functions on coordinates with the condition that \(h\) is supermultiplicative. In the case when \(h\) is submultiplicative, Petrović's inequality is generalized for \(h-\)concave functions. Also, particular cases for \(P-\)functions, Godunova-Levin functions, \(s-\)Godunova-Levin functions and \(s-\)convex functions have been discussed.
]]>

Open Journal of Mathematical Sciences

An extension of Petrović’s inequality for \(h-\)convex (\(h-\)concave) functions in plane

Wasim Iqbal, Khalid Mahmood Awan, Atiq Ur Rehman\(^1\), Ghulam Farid
COMSATS University Islamabad,Park Road, Tarlai Kalan, Islamabad, Pakistan.; (W.I)
Department of Mathematics, University of Sargodha, Sargodha, Pakistan.; (K.M.A)
COMSATS University Islamabad, Attock Campus, Kamra Road, Attock, Pakistan.; (A.U.R & G.F)
\(^{1}\)Corresponding Author: atiq@mathcity.org

Abstract

In this paper, Petrović's inequality is generalized for \(h-\)convex functions on coordinates with the condition that \(h\) is supermultiplicative. In the case when \(h\) is submultiplicative, Petrović's inequality is generalized for \(h-\)concave functions. Also, particular cases for \(P-\)functions, Godunova-Levin functions, \(s-\)Godunova-Levin functions and \(s-\)convex functions have been discussed.

Keywords:

Petrović’s inequality, \(h-\)convex functions, \(h-\)concave functions, \(h-\)convex functions on coordinates, \(h-\)concave functions on coordinates.

1. Introduction

Let \(h:[c,d]\to \mathbb{R}\) be a non-negative function with \((0,1)\subseteq [c,d]\). A function \(f :[a,b] \to \mathbb{R}\) is said to be \(h-\)convex if \(f\) is non-negative and, for all \(x,y \in [a,b]\) and \(\alpha\in (0,1),\) one has
\begin{align}\label{1} f(\alpha x +(1-\alpha)y)\leq h(\alpha)f(x)+h(1-\alpha)f(y). \end{align}
(1)
If the above inequality is reversed, then \(f\) is said to be \(h\)-concave. The \(h-\)convex functions were introduced by Varošanec in [1]. This notion generalizes convex functions and many other generalizations of convex functions, such as \(s-\)convex functions, Godunova-Levin functions, \(s-\)Godunova-Levin functions and \(P-\)functions given in [1, 2, 3].
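
For a concrete feel for inequality (1), the following small numeric sketch (Python; the sample function \(f(x)=x^2\) and the grid of test points are our illustrative choices) checks the \(h\)-convexity condition for \(h(\alpha)=\alpha\) (ordinary convexity) and for \(h(\alpha)=1\) (the \(P\)-function case).

# Check f(a*x + (1-a)*y) <= h(a)*f(x) + h(1-a)*f(y) on a grid of sample points.
def is_h_convex(f, h, xs, alphas, tol=1e-12):
    return all(
        f(a * x + (1 - a) * y) <= h(a) * f(x) + h(1 - a) * f(y) + tol
        for x in xs for y in xs for a in alphas
    )

f = lambda x: x * x                       # non-negative and convex on [0, 10]
xs = [i / 2 for i in range(21)]           # sample points in [0, 10]
alphas = [i / 10 for i in range(1, 10)]   # alpha in (0, 1)

print(is_h_convex(f, lambda a: a, xs, alphas))   # h(alpha) = alpha: convexity
print(is_h_convex(f, lambda a: 1, xs, alphas))   # h(alpha) = 1: P-function condition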

Remark 1. Particular values of \(h\) in inequality (1) give the following results:

  1. \(h(\alpha)=\alpha\) gives the convex functions.
  2. \(h(\alpha)=1\) gives the \(P-\)functions.
  3. \(h(\alpha)=\alpha^s\) and \(\alpha \in (0,1)\) gives the \(s-\)convex functions of second sense.
  4. \(h(\alpha)=\frac{1}{\alpha}\) and \(\alpha \in (0,1)\) gives the Godunova-Levin functions.
  5. \(h(\alpha)=\frac{1}{\alpha^s}\) and \(\alpha \in (0,1)\) gives the \(s-\)Godunova-Levin functions of second sense.
  6. In the case of \(h-\)concavity, the following results are valid:
  7. \(h(\alpha)=1\) gives the reverse \(P-\)functions.
  8. \(h(\alpha)=\frac{1}{\alpha}\) gives the reverse Godunova-Levin functions.
  9. \(h(\alpha)=\frac{1}{\alpha^s}\) gives the reverse \(s-\)Godunova-Levin functions of second sense.

In [4], Dragomir gave the definition of convex functions on coordinates. Following his idea, \(h-\)convex functions on coordinates were introduced by Alomari et al. in [5].

Definition 1. Let \(\Delta=[a_1,b_1]\times[a_2,b_2]\subseteq\mathbb{R}^2\) and \(f:\Delta\to \mathbb{R}\) be a mapping. Define partial mappings

\begin{equation}\label{part-map-01} f_y:[a_1,b_1] \to \mathbb{R} \hbox{ by } {f_y}( u ) = f( u,y ) \end{equation}
(2)
and
\begin{equation}\label{part-map-02} f_x:[a_2,b_2] \to \mathbb{R} \hbox{ by } {f_x}( v ) = f( x,v ). \end{equation}
(3)
Also let the interval \([c,d]\) contain \((0,1)\) and \(h:[c,d] \to \mathbb{R}\) be a positive function. A mapping \(f:\Delta \to \mathbb{R}\) is said to be \(h-\)convex (\(h-\)concave) on \(\Delta\) if the partial mappings defined in (2) and (3) are \(h-\)convex (\(h-\)concave) on \([a_1,b_1]\) and \([a_2,b_2]\) respectively, for all \(y\in [a_2,b_2]\) and \(x\in [a_1,b_1].\)

Remark 2. From above definition, one can deduce the definitions of those particular cases on coordinates.

In [6](also see [7, p. 154]), Petrović proved the following result, which is known as Petrović's inequality in the literature.

Theorem 2. Let \(\left( {{x_1},...,{x_n}} \right)\) and \(({p_1},...,{p_n})\) be non-negative n-tuples such that \(\sum_{k=1}^{n}p_kx_k\geq x_i\) for \(i=1,...,n\) and \(\sum_{k=1}^{n}p_kx_k\in[0,a]\). If \(f\) is a convex function on \([0,a]\), then the inequality

\begin{align}\label{0} \sum_{k=1}^{n}p_kf(x_k)\leq f\left(\sum_{k=1}^{n}p_kx_k\right)+\left(\sum_{k=1}^{n}p_k-1\right)f(0) \end{align}
(4)
is valid.
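
Inequality (4) is easy to test numerically. A minimal Python sketch, with an illustrative convex function \(f(x)=x^2\) and sample tuples chosen to satisfy the hypotheses of Theorem 2, is given below.

# Numeric check of Petrovic's inequality (4) for a sample convex function.
def petrovic_holds(f, x, p, tol=1e-12):
    s = sum(pk * xk for pk, xk in zip(p, x))
    assert all(s >= xi for xi in x), "hypothesis sum p_k x_k >= x_i must hold"
    lhs = sum(pk * f(xk) for pk, xk in zip(p, x))
    rhs = f(s) + (sum(p) - 1) * f(0)
    return lhs <= rhs + tol

f = lambda u: u * u                 # convex and non-negative on [0, a]
x = (1.0, 2.0, 0.5)
p = (1.0, 1.0, 2.0)                 # sum p_k x_k = 4 >= every x_i
print(petrovic_holds(f, x, p))      # expected: True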

A function \(h:[c,d] \to \mathbb{R}\) is said to be a submultiplicative function if
\begin{align}\label{2} h(xy)\leq h(x)h(y), \end{align}
(5)
for all \(x,y\in [c,d].\) If the above inequality is reversed, then \(h\) is said to be a supermultiplicative function. If equality holds in the above inequality, then \(h\) is said to be a multiplicative function. By considering \(h\) to be supermultiplicative, along with a further condition, the following generalization of Petrović's inequality was proved by Rehman et al. in [8].

Theorem 3. Let \((x_1,...,x_n)\) be non-negative n-tuples and \((p_1,...,p_n)\) be positive n-tuples such that

\begin{equation}\label{sk1} \sum_{k=1}^{n} {p_kx_k}\in[0,a] \text{ and } \sum_{k=1}^{n} {p_k x_k}\geq{x_j} \text{ for each } j=1,...,n. \end{equation}
(6)
Also let \(h:[0,\infty) \to \mathbb{R^+}\) be a supermultiplicative function such that
\begin{align}\label{977} h(\alpha)+h(1-\alpha)\leq 1, \text{ for all }\alpha \in (0,1). \end{align}
(7)
If \(f:[0,\infty) \to \mathbb{R}\) is an \(h-\)convex function on \([0,\infty)\), then
\begin{equation}\label{22} \sum\limits_{j=1}^{n}p_jf(x_j)\leq\frac{\sum\limits_{j=1}^{n}p_jh(x_j-c)}{h\left(\sum\limits_{k=1}^{n}p_kx_k -c\right) }f\left(\sum\limits_{k=1}^{n}p_kx_k\right)+\left( \sum\limits_{j=1}^{n}p_j-\frac{\sum\limits_{j=1}^{n}p_jh(x_j-c)}{h\left(\sum\limits_{k=1}^{n}p_kx_k-c \right) }\right)f(c). \end{equation}
(8)
The following reverse version of above theorem was also proved in [8].

Theorem 4. Let \((x_1,...,x_n)\) be non-negative n-tuples and \((p_1,...,p_n)\) be positive n-tuples and the conditions given in (6) are valid. Also let \(h:[0,a] \to \mathbb{R^+}\) be a submultiplicative function such that

\begin{align}\label{1-9} h(\alpha)+h(1-\alpha)\geq 1, \text{ for all }\alpha \in (0,1). \end{align}
(9)
If \(f:[0,a] \to \mathbb{R}\) is an \(h-\)concave function on \([0,a]\), then the reverse of (8) is valid.

In recent years, \(h-\)convex functions have been considered in the literature by many researchers and mathematicians; see, for example, [1, 3, 5, 9] and the references therein. Many authors have worked on Petrović's inequality and results related to it, for example see [6, 7, 10], and it has been generalized for \(m-\)convex functions by Bakula et al. in [11]. In [12], Petrović's inequality was generalized on coordinates by using the definition of convex functions on coordinates. In this paper, Petrović's inequality is generalized for \(h-\)convex functions on coordinates when \(h\) is a supermultiplicative function. When \(h\) is submultiplicative, Petrović's inequality is generalized for \(h-\)concave functions on coordinates.

2. Main results

The following theorem contains the generalized Petrović inequality for \(h-\)convex functions on coordinates.

Theorem 5. Let \((x_1,...,x_n)\) and \((y_1,...,y_n)\) be non-negative n-tuples, \((p_1,...,p_n)\) and \((q_1,...,q_n)\) be positive n-tuples such that

\begin{align}\label{9} \sum_{k=1}^{n} {p_kx_k}\in[0,a], \sum_{k=1}^{n} {p_k x_k}\geq{x_j} \text{ for each } j=1,...,n, \end{align}
(10)
and
\begin{align}\label{91} \sum_{j=1}^{n} {q_jy_j}\in[0,b], \sum_{j=1}^{n} {q_j y_j}\geq{y_i} \text{ for each } i=1,...,n. \end{align}
(11)
Also let \(h:[0,\infty) \to \mathbb{R^+}\) be a supermultiplicative function such that (7) is valid. If \(f:[0,a]\times [0,b] \to \mathbb{R}\) is an \(h-\)convex function on coordinates, then
\begin{eqnarray}\label{Ws} \sum\limits_{k=1}^{n}\sum\limits_{j=1}^{n}p_kq_jf(x_k,y_j)&\leq&\frac{\sum\limits_{j=1}^{n}p_jh(x_j-c_1)}{h\left(\sum\limits_{k=1}^{n}p_kx_k-c_1 \right)}\left\{\frac{\sum\limits_{j=1}^{n}q_jh(y_j-c_2)}{h\left(\sum\limits_{k=1}^{n}q_ky_k-c_2 \right)}f\left(\sum\limits_{k=1}^{n}p_kx_k,\sum\limits_{j=1}^{n}q_jy_j\right)\right.\nonumber\\&& \left.+\left(\sum\limits_{j=1}^{n}q_j-\frac{\sum\limits_{j=1}^{n}q_jh(y_j-c_2)}{h\left(\sum\limits_{k=1}^{n}q_ky_k-c_2 \right)}\right)f\left(\sum\limits_{k=1}^{n}p_kx_k,c_2\right)\right\}+\left(\sum\limits_{j=1}^{n}p_j-\frac{\sum\limits_{j=1}^{n}p_jh(x_j-c_1)}{h\left(\sum\limits_{k=1}^{n}p_kx_k-c_1 \right)}\right)\nonumber\\&& \left\{\frac{\sum\limits_{j=1}^{n}q_jh(y_j-c_2)}{h\left(\sum\limits_{k=1}^{n}q_ky_k-c_2 \right)}f\left(c_1,\sum\limits_{j=1}^{n}q_jy_j\right)+\left( \sum\limits_{j=1}^{n}q_j-\frac{\sum\limits_{j=1}^{n}q_jh(y_j-c_2)}{h\left(\sum\limits_{k=1}^{n}q_ky_k-c_2 \right)}\right)f(c_1,c_2) \right\}, \end{eqnarray}
(12)
where \(x_i>c_1, y_j>c_2.\)

Proof. Let \(f_x:[0,a]\to\mathbb{R}\) and \(f_y:[0,b]\to\mathbb{R}\) be mappings such that \(f_x(v)=f(x,v)\) and \(f_y(u)=f(u,y).\) Since \(f\) is coordinated \(h-\)convex on \([0,a]\times [0,b]\), therefore \(f_y\) is \(h-\)convex on \([0,b]\), so by Theorem 3, one has \begin{align*} \sum\limits_{j=1}^{n}p_jf_y(x_j)\leq\frac{\sum\limits_{j=1}^{n}p_jh(x_j-c_1)}{h\left(\sum\limits_{k=1}^{n}p_kx_k-c_1 \right) }f_y\left(\sum\limits_{k=1}^{n}p_kx_k\right) +\left( \sum\limits_{j=1}^{n}p_j-\frac{\sum\limits_{j=1}^{n}p_jh(x_j-c_1)}{h\left(\sum\limits_{k=1}^{n}p_kx_k-c_1 \right) }\right)f_y(c_1). \end{align*} This is equivalent to \begin{align*} \sum\limits_{j=1}^{n}p_jf(x_j,y)\leq\frac{\sum\limits_{j=1}^{n}p_jh(x_j-c_1)}{h\left(\sum\limits_{k=1}^{n}p_kx_k -c_1\right) }f\left(\sum\limits_{k=1}^{n}p_kx_k,y\right)+\left(\sum\limits_{j=1}^{n}p_j-\frac{\sum\limits_{j=1}^{n}p_jh(x_j-c_1)}{h\left(\sum\limits_{k=1}^{n}p_kx_k -c_1\right) }\right)f(c_1,y), \end{align*} by setting \(y=y_j,\) we get \begin{align*} \sum\limits_{j=1}^{n}p_jf(x_j,y_j)\leq\frac{\sum\limits_{j=1}^{n}p_jh(x_j-c_1)}{h\left(\sum\limits_{k=1}^{n}p_kx_k -c_1\right) }f\left(\sum\limits_{k=1}^{n}p_kx_k,y_j\right) +\left(\sum\limits_{j=1}^{n}p_j-\frac{\sum\limits_{j=1}^{n}p_jh(x_j-c_1)}{h\left(\sum\limits_{k=1}^{n}p_kx_k -c_1\right) }\right)f(c_1,y_j). \end{align*} Multiplying above inequality by \(p_j\) and taking sum for \(j=1,...,n,\) one has

\begin{equation}\label{w1} \begin{aligned} \sum\limits_{k=1}^{n}\sum\limits_{j=1}^{n}p_jq_jf(x_j,y_j)&\leq\frac{\sum\limits_{j=1}^{n}p_jh(x_j-c_1)}{h\left(\sum\limits_{k=1}^{n}p_kx_k-c_1 \right)}\sum\limits_{k=1}^{n}q_jf\left(\sum\limits_{k=1}^{n}p_kx_k,y_j\right)+\left(\sum\limits_{j=1}^{n}p_j-\frac{\sum\limits_{j=1}^{n}p_jh(x_j-c_1)}{h\left(\sum\limits_{k=1}^{n}p_kx_k -c_1\right)}\right) \sum\limits_{k=1}^{n}q_jf(c_1,y_j). \end{aligned} \end{equation}
(13)
Now again by Theorem 4, one has
\( \sum\limits_{j=1}^{n}q_j f\left(\sum\limits_{k=1}^{n}p_kx_k,y_j\right)\leq\frac{\sum\limits_{j=1}^{n}q_jh(y_j-c_2)}{h\left(\sum\limits_{k=1}^{n}q_ky_k-c_2\right)}f\left(\sum\limits_{k=1}^{n}p_kx_k,\sum\limits_{j=1}^{n}q_jy_j\right)+\left(\sum\limits_{j=1}^{n}q_j-\frac{\sum\limits_{j=1}^{n}q_jh(y_j-c_2)}{h\left(\sum\limits_{k=1}^{n}q_ky_k-c_2\right)}\right)f\left(\sum\limits_{k=1}^{n}p_kx_k,c_2\right) \)\\ and \begin{align*} \sum\limits_{j=1}^{n}q_jf\left(c_1,y_j\right)\leq\frac{\sum\limits_{j=1}^{n}q_jh(y_j-c_2)}{h\left(\sum\limits_{k=1}^{n}q_ky_k -c_2\right) }f\left(c_1,\sum\limits_{j=1}^{n}q_jy_j\right) +\left(\sum\limits_{j=1}^{n}q_j-\frac{\sum\limits_{j=1}^{n}q_jh(y_j-c_2)}{h\left(\sum\limits_{k=1}^{n}q_ky_k-c_2 \right)}\right)f(c_1,c_2). \end{align*} Putting these values in inequality (13), we get the required result.

In the following theorem, we give Petrović's inequality for \(h-\)convex functions on coordinates.

Theorem 6. Let the conditions given in Theorem 5 be valid. If \(f:[0,a]\times [0,b] \to \mathbb{R}\) is an \(h-\)convex function on coordinates, then

\begin{eqnarray}\label{kk} &&\sum\limits_{k=1}^{n}\sum\limits_{j=1}^{n}p_jq_jf(x_j,y_j)\nonumber\\&&\leq\frac{\sum\limits_{j=1}^{n}p_jh(x_j)}{h\left(\sum\limits_{k=1}^{n}p_kx_k \right)}\left\{\frac{\sum\limits_{j=1}^{n}q_jh(y_j)}{h\left(\sum\limits_{k=1}^{n}q_ky_k \right)}f\left(\sum\limits_{k=1}^{n}p_kx_k,\sum\limits_{j=1}^{n}q_jy_j\right)\right. \left.+\left(\sum\limits_{j=1}^{n}q_j-\frac{\sum\limits_{j=1}^{n}q_jh(y_j)}{h\left(\sum\limits_{k=1}^{n}q_ky_k \right)}\right)f\left(\sum\limits_{k=1}^{n}p_kx_k,0\right)\right\}\nonumber\\&&+\left(\sum\limits_{j=1}^{n}p_j-1\right)\left\{\frac{\sum\limits_{j=1}^{n}q_jh(y_j)}{h\left(\sum\limits_{k=1}^{n}q_ky_k \right) }\right. \left.f\left(0,\sum\limits_{j=1}^{n}q_jy_j\right)+\left( \sum\limits_{j=1}^{n}q_j-\frac{\sum\limits_{j=1}^{n}q_jh(y_j)}{h\left(\sum\limits_{k=1}^{n}q_ky_k \right)}\right)f(0,0) \right\}. \end{eqnarray}
(14)

Proof. If we take \(c_1=0=c_2\) in Theorem 5, we get the required result.

In the following theorem, we give Petrović's inequality for convex functions on coordinates, which was given in [12].

Theorem 7. Let the conditions given in Theorem 5 be valid. If \(f:[0,a]\times [0,b] \to \mathbb{R}\) is a convex function on coordinates, then

\begin{eqnarray}\label{ss} \sum\limits_{k=1}^{n}\sum\limits_{j=1}^{n}p_jq_jf(x_j,y_j)&\leq& f\left(\sum\limits_{k=1}^{n}p_kx_k,\sum\limits_{j=1}^{n}q_jy_j\right)+\left(\sum\limits_{j=1}^{n}q_ j-1\right)f\left(\sum\limits_{k=1}^{n}p_kx_k,0\right)\nonumber\\&&+\left(\sum\limits_{j=1}^{n}p_j-1\right)\left\{f\left(0,\sum\limits_{j=1}^{n}q_jy_j\right)+\left( \sum\limits_{j=1}^{n}q_j-1\right)f(0,0) \right\}. \end{eqnarray}
(15)

Proof. If we take \(h(x)=x\) for all \(x\in [0,\infty)\), then it satisfied the condition imposed on \(h\) given in Theorem 5. Hence using this value of \(h\) in above theorem gives the required result.
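
Theorem 7 can likewise be checked numerically. In the sketch below (Python; the sample data are ours), the left-hand side of (15) is read as the double sum \(\sum_k\sum_j p_kq_jf(x_k,y_j)\), as in (12), and the coordinate-wise convex test function is \(f(x,y)=x^2+y^2\).

# Numeric check of the coordinate Petrovic inequality (15).
def petrovic_2d_holds(f, x, p, y, q, tol=1e-12):
    sx = sum(pk * xk for pk, xk in zip(p, x))
    sy = sum(qj * yj for qj, yj in zip(q, y))
    lhs = sum(pk * qj * f(xk, yj) for pk, xk in zip(p, x) for qj, yj in zip(q, y))
    rhs = (f(sx, sy)
           + (sum(q) - 1) * f(sx, 0)
           + (sum(p) - 1) * (f(0, sy) + (sum(q) - 1) * f(0, 0)))
    return lhs <= rhs + tol

f = lambda u, v: u * u + v * v           # convex in each coordinate on [0,a]x[0,b]
x, p = (1.0, 2.0), (1.0, 1.0)            # sum p_k x_k = 3 >= every x_j
y, q = (1.0, 2.0), (1.0, 1.0)            # sum q_j y_j = 3 >= every y_i
print(petrovic_2d_holds(f, x, p, y, q))  # expected: True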

One can see that the condition on the function \(h\) given in (7) restricts us from giving Petrović type inequalities for the particular cases of \(h-\)convex functions given in Remark 1. If we consider the reverse inequality in (7), then it covers some of the particular cases, but for \(h-\)concave functions. In the following theorem, the reverse of (12) is concluded. The notable thing is the requirement of submultiplicativity and the reverse of (7) for the function \(h\), along with \(h\)-concavity of the function \(f\).

Theorem 8. Let \((x_1,...,x_n)\) and \((y_1,...,y_n)\) be non-negative n-tuples, \((p_1,...,p_n)\) and \((q_1,...,q_n)\) be positive \(n-\)tuples such that (10) and (11) are valid. Also let \(h:[0,\infty) \to \mathbb{R^+}\) be a submultiplicative function such that (9) is valid. If \(f:[0,a]\times [0,b] \to \mathbb{R}\) is an \(h-\)concave function on coordinates, then the reverse of inequality (12) holds.

Proof. By using Theorem 4 and following the steps of the proof of Theorem 5, one can deduce the required result.

In the following theorem, we give Petrović's inequality for \(h-\)concave functions on coordinates.

Theorem 9. Let the conditions given in Theorem 8 be valid. If \(f:[0,a]\times [0,b] \to \mathbb{R}\) is an \(h-\)concave function on coordinates, then the reverse of inequality (14) is valid.

Proof. If we take \(c_1=0=c_2\) in Theorem 8, we get the required result.

In the following theorem, we give Petrović's inequality for concave functions on coordinates.

Theorem 10. Let the conditions given in Theorem 8 be valid. If \(f:[0,a]\times [0,b] \to \mathbb{R}\) is a concave function on coordinates, then the reverse of inequality (15) is valid.

Proof. If we take \(h(x)=x\) and \(c_1=0=c_2\) in Theorem 8, we get the required result.

Theorem 11. Let \((x_1,...,x_n)\) and \((y_1,...,y_n)\) be non-negative n-tuples, \((p_1,...,p_n)\) and \((q_1,...,q_n)\) be positive n-tuples such that (10) and (11) are valid. If \(f:[0,a]\times [0,b] \to \mathbb{R}\) is a reverse \(P-\)function on coordinates, then

\begin{equation}\label{reverse-P-function2} \sum\limits_{k=1}^{n}\sum\limits_{j=1}^{n}p_kq_jf(x_k,y_j) \geq \sum_{i=1}^n \sum_{j=1}^{n}p_i q_j f\left(\sum\limits_{k=1}^{n}p_kx_k,\sum\limits_{j=1}^{n}q_jy_j\right). \end{equation}
(16)

Remark 3. Consider \(h(x)=\frac{1}{x}\); then \( h(\alpha)+h(1-\alpha)=\frac{1}{\alpha}+\frac{1}{1-\alpha} >1 \text{ for all } \alpha \in (0,1). \) Using this value of \(h\) in Theorem 8 gives a Petrović type inequality for reverse Godunova-Levin functions on coordinates.

Remark 4. Let us consider \(H(h)=h(\alpha)+h(1-\alpha)-1,\ \alpha \in (0,1),\) and take \(g_1(\alpha):=H(\alpha^s)=\alpha^s+(1-\alpha)^s-1 ,\text{ where } s\in(0,1).\) In [8], it has been shown that \(g_1\) is positive for values of \(\alpha\) and \(s\) in the interval \((0,1)\); therefore \(h(\alpha)=\alpha^s\) for \(\alpha, s \in (0,1)\) satisfies the conditions of Theorem 8, but it does not satisfy the conditions of Theorem 5. Hence this value of \(h\) in Theorem 8 leads us to Petrović type inequalities for \(s-\)concave functions of the second sense on coordinates.

Remark 5. Let us consider \(g_2(\alpha):=H\left(\frac{1}{\alpha^s}\right)=\frac{1}{\alpha^s}+\frac{1}{(1-\alpha)^s}-1,\text{ where } s\in(0,1).\) This function is also discussed in [8], where it is shown that \(g_2\) is positive for values of \(\alpha\) and \(s\) in \((0,1)\). Thus \(h(\alpha)=\frac{1}{\alpha^s}\) satisfies the conditions of Theorem 8, but it does not satisfy the conditions of Theorem 5. Hence this value of \(h\) in Theorem 8 leads us to Petrović type inequalities for reverse \(s-\)Godunova-Levin functions on coordinates.

Acknowledgments

The authors are very grateful to the editor and reviewers for their careful and meticulous reading of the paper. The research work of the third author is supported by the Higher Education Commission of Pakistan under NRPU 2017-18, Project No. 7962.

Author Contributions

All authors contributed equally to the writing of this paper. All authors read and approved the final manuscript.

Competing Interests

The author(s) do not have any competing interests in the manuscript.

References

  1. Varošanec, S. (2007). On h-convexity. Journal of Mathematical Analysis and Applications, 326(1), 303-311.
  2. Dragomir, S. S., Pečarić, J., & Persson, L. E. (1995). Some inequalities of Hadamard type. Soochow Journal of Mathematics, 21(3), 335-341.
  3. Házy, A. (2011). Bernstein-Doetsch-type results for h-convex functions. Mathematical Inequalities & Applications, 14(3), 499-508.
  4. Dragomir, S. S. (2001). On the Hadamard's inequality for convex functions on the co-ordinates in a rectangle from the plane. Taiwanese Journal of Mathematics, 775-788.
  5. Latif, M. A., & Alomari, M. (2009). On Hadamard-type inequalities for h-convex functions on the co-ordinates. International Journal of Mathematical Analysis, 3(33), 1645-1656.
  6. Petrović, M. (1932). Sur une fonctionnelle. Publ. Math. Univ. Belgrade, 1, 146-149.
  7. Pečarić, J. E., Proschan, F., & Tong, Y. L. (1992). Convex functions, partial orderings, and statistical applications. Academic Press.
  8. Rehman, A. U., Farid, G., & Mishra, V. N. (2019). Generalized convex function and associated Petrović's inequality. International Journal of Analysis and Applications, 17(1), 122-131.
  9. Olbryś, A. (2015). On separation by h-convex functions. Tatra Mountains Mathematical Publications, 62(1), 105-111.
  10. Pečarić, J. E. (1983). On the Petrović inequality for convex functions. Glasnik Matematički, 18(38), 77-85.
  11. Bakula, M. K., Pečarić, J., & Ribičić, M. (2006). Companion inequalities to Jensen's inequality for \(m\)-convex and \((\alpha,m)\)-convex functions. Journal of Inequalities in Pure & Applied Mathematics, 7(5), 26-35.
  12. Rehman, A. U., Mudessir, M., Fazal, H. T., & Farid, G. (2016). Petrović's inequality on coordinates and related results. Cogent Mathematics & Statistics, 3(1), 1227298.
]]>
On smarandachely adjacent vertex total coloring of subcubic graphs https://old.pisrt.org/psr-press/journals/oms-vol-3-2019/on-smarandachely-adjacent-vertex-total-coloring-of-subcubic-graphs/ Sat, 30 Nov 2019 16:10:55 +0000 https://old.pisrt.org/?p=3513
OMS-Vol. 3 (2019), Issue 1, pp. 390 – 397 Open Access Full-Text PDF
Enqiang Zhu, Chanjuan Liu
Abstract: Inspired by the observation that adjacent vertices need possess their own characteristics in terms of total coloring, we study the smarandachely adjacent vertex total coloring (abbreviated as SAVTC) of a graph \(G\), which is a proper total coloring of \(G\) such that for every vertex \(u\) and its every neighbor \(v\), the color-set of \(u\) contains a color not in the color-set of \(v\), where the color-set of a vertex is the set of colors appearing at the vertex or its incident edges. The minimum number of colors required for an SAVTC is denoted by \(\chi_{sat}(G)\). Compared with total coloring, SAVTC would be more likely to be developed for potential applications in practice. For any graph \(G\), it is clear that \(\chi_{sat}(G)\geq \Delta(G)+2\), where \(\Delta(G)\) is the maximum degree of \(G\). We, in this work, analyze this parameter for general subcubic graphs. We prove that \(\chi_{sat}(G)\leq 6\) for every subcubic graph \(G\). Especially, if \(G\) is an outerplanar or claw-free subcubic graph, then \(\chi_{sat}(G)=5\).
]]>

Open Journal of Mathematical Sciences

On smarandachely adjacent vertex total coloring of subcubic graphs

Enqiang Zhu, Chanjuan Liu\(^1\)
Institute of Computing Science and Technology, Guangzhou University, Guangzhou 510006, China.; (E.Z)
School of Computer Science and Technology, Dalian University of Technology, Dalian 116024, China.; (C.L)
\(^{1}\)Corresponding Author: chanjuanliu@dlut.edu.cn

Abstract

Inspired by the observation that adjacent vertices need possess their own characteristics in terms of total coloring, we study the smarandachely adjacent vertex total coloring (abbreviated as SAVTC) of a graph \(G\), which is a proper total coloring of \(G\) such that for every vertex \(u\) and its every neighbor \(v\), the color-set of \(u\) contains a color not in the color-set of \(v\), where the color-set of a vertex is the set of colors appearing at the vertex or its incident edges. The minimum number of colors required for an SAVTC is denoted by \(\chi_{sat}(G)\). Compared with total coloring, SAVTC would be more likely to be developed for potential applications in practice. For any graph \(G\), it is clear that \(\chi_{sat}(G)\geq \Delta(G)+2\), where \(\Delta(G)\) is the maximum degree of \(G\). We, in this work, analyze this parameter for general subcubic graphs. We prove that \(\chi_{sat}(G)\leq 6\) for every subcubic graph \(G\). Especially, if \(G\) is an outerplanar or claw-free subcubic graph, then \(\chi_{sat}(G)=5\).

Keywords:

Smarandachely adjacent vertex total coloring, subcubic graphs, outerplane graphs, claw-free.

All graphs considered in this paper are simple and undirected. The terminology and notation used but undefined here can be found in [1]. Let \(G\) be a graph with vertex set \(V(G)\) and edge set \(E(G)\). We use \(d_G(v)\) to denote the degree of a vertex \(v\) in \(G\). A vertex \(v\) is called a \(t\)-vertex (\(t^-\)-vertex or \(t^+\)-vertex) of \(G\) if \(d_G(v)\)=\(t\) (\(d_G(v)\leq t\) or \(d_G(v)\geq t\)). We refer to \(t\)-vertices, \(t^-\)-vertices and \(t^+\)-vertices adjacent to \(v\) as \(t\)-neighbors, \(t^-\)-neighbors and \(t^+\)-neighbors of \(v\), respectively. Let \(\Delta(G)\) and \(\delta(G)\) denote the maximum degree and minimum degree of \(G\), respectively. The open neighborhood of \(v\), written as \(N_G(v)\), is defined as the set of vertices adjacent to \(v\) in \(G\), i.e. \(N_G(v) = \{u|uv \in E(G)\}\). For any \(V'\subset V(G)\) and \(E'\subseteq E(G)\), we use \(G-V'\) (resp. \(G-E'\)) to denote the graph obtained from \(G\) by deleting vertices in \(V'\) and their incident edges (resp. by removing edges in \(E'\)). For any integers \(a, b\) with \(a< b\), let \([a, b]\)=\(\{a, a+1, \ldots, b\}\).

We, for convenience, denote by \(T(G)\) the set of vertices and edges of a graph \(G\), i.e. \(T(G)=V(G)\cup E(G)\). Let \(k\) be a positive integer, and \(f\) a mapping from \(T(G)\) to \([1, k]\). If \(f\) satisfies the following coloring conditions:

  • (1) \(f(u)\neq f(v)\) for any \(uv\in E(G)\),
  • (2) \(f(u)\neq f(e)\) for every vertex \(u\) and every edge \(e\) incident with \(u\),
  • (3) \(f(e)\neq f(e')\) for every pair \(e,e'\) of adjacent edges,
then we call \(f\) a proper total \(k\)-coloring of \(G\). For any \(v\in V(G)\), we call \(C_{f}(v)\) the color-set of \(v\) (under \(f\)), which denotes the set of colors of \(v\) and its incident edges under \(f\). Furthermore, let \(\overline{C}_f(x)=[1,k]\setminus C_{f}(x)\). To distinguish the color-sets of two adjacent vertices from the perspective of proper total coloring, Zhang et al. [2] introduced the concept of adjacent vertex distinguishing total coloring (AVDTC for short), which is a proper total coloring \(f\) with the constraint (4) as follows:
  • (4) \(C_{f}(u)\neq C_{f}(v)\) for every \(uv\in E(G)\).

The minimum number \(k\) such that \(G\) has a \(k\)-AVDTC is the adjacent vertex distinguishing total chromatic number of \(G\), denoted by \(\chi_{at}(G)\). As for this parameter, a famous conjecture says that every graph \(G\) has an adjacent vertex distinguishing total coloring using at most \(\Delta(G)+3\) colors, i.e. \(\chi_{at}(G)\leq \Delta(G)+3\). This conjecture has been confirmed for special families of graphs, e.g. graphs with maximum degree 3 [3, 4, 5], graphs without \(K_4\)-minor [6], graphs with smaller maximum average degree and large maximum degree [7, 8], outerplane graphs [9], 2-degenerate graphs [10], graphs with maximum degree 4 [11], generalized Mycielski graphs [12], etc. In [13], a stronger version of AVDTC called smarandachely adjacent vertex total coloring (abbreviate to SAVTC) is studied. A \(k\)-SAVTC of a graph \(G\) is a proper total \(k\)-coloring that satisfies coloring condition (5) as below:

  • (5) \(C_{f}(u)\setminus C_{f}(v)\neq \emptyset\) and \(C_{f}(v)\setminus C_{f}(u)\neq \emptyset\) for every \(uv\in E(G)\).

We refer to the smallest number \(k\) such that \(G\) has a \(k\)-SAVTC as the smarandachely adjacent vertex total chromatic number of \(G\), denoted by \(\chi_{sat}(G)\). Clearly, condition (5) is a stronger version of condition (4). That is, if \(f\) is a \(k\)-SAVTC of \(G\), then \(f\) is a \(k\)-AVDTC of \(G\), whereas the converse is not necessarily true. For example, when a graph \(G\) contains no adjacent vertices with maximum degree, it is possible that \(\chi_{at}(G)=\Delta(G)+1\), e.g. the star graph \(S_n, n\geq 3\). However, by coloring condition (5), one can readily check that \(\chi_{sat}(G)\geq \Delta(G)+2\) for all graphs \(G\).

Therefore, such a parameter is independent, interesting and meaningful. In [13], Zhang proposed the following conjecture.
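
To make the definitions concrete, the following brute-force Python sketch (all helper names and the test graph \(C_4\) are our illustrative choices) checks conditions (1)-(3) and (5) for a candidate total coloring and searches for the smallest \(k\) admitting a \(k\)-SAVTC; for \(C_4\) it returns \(4=\Delta(C_4)+2\), matching the lower bound just mentioned.

from itertools import product

def color_set(v, vcol, ecol, edges):
    # C_f(v): the color of v together with the colors of its incident edges
    return {vcol[v]} | {ecol[e] for e in edges if v in e}

def is_savtc(vertices, edges, vcol, ecol):
    # (1) adjacent vertices receive different colors
    if any(vcol[u] == vcol[v] for (u, v) in edges):
        return False
    for e in edges:
        u, v = e
        # (2) an edge differs from both of its end vertices
        if ecol[e] in (vcol[u], vcol[v]):
            return False
        # (3) adjacent edges receive different colors
        if any(e2 != e and set(e) & set(e2) and ecol[e2] == ecol[e] for e2 in edges):
            return False
    # (5) each color-set contains a color missing from the neighbor's color-set
    for (u, v) in edges:
        cu = color_set(u, vcol, ecol, edges)
        cv = color_set(v, vcol, ecol, edges)
        if not (cu - cv) or not (cv - cu):
            return False
    return True

def sat_chromatic_number(vertices, edges, kmax=7):
    # brute force over all total k-colorings of a (very) small graph
    for k in range(1, kmax + 1):
        cols = range(1, k + 1)
        for vc in product(cols, repeat=len(vertices)):
            vcol = dict(zip(vertices, vc))
            for ec in product(cols, repeat=len(edges)):
                ecol = dict(zip(edges, ec))
                if is_savtc(vertices, edges, vcol, ecol):
                    return k
    return None

C4_vertices = [0, 1, 2, 3]
C4_edges = [(0, 1), (1, 2), (2, 3), (3, 0)]
print(sat_chromatic_number(C4_vertices, C4_edges))   # prints 4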

Conjecture 1.[13] For any graph \(G\), \(\chi_{sat}(G)\leq \Delta(G)+3\).

Observe that for two adjacent vertices \(u,v\in V(G)\) such that \(d_G(u)\leq d_G(v)\), to check that \(C_f(u)\) and \(C_f(v)\) satisfy the coloring condition (5) under a total coloring \(f\) of \(G\), it is sufficient to examine whether there is an element \(c\) such that \(c\in C_f(u)\) and \(c\notin C_f(v)\). Therefore, we have the following lemma, which demonstrates the relation between \(\chi_{at}(G)\) and \(\chi_{sat}(G)\) for regular graphs \(G\).

Lemma 2. Let \(G\) be a regular graph. Then, \(\chi_{at}(G)=\chi_{sat}(G)\).

To verify our results in this paper, we first introduce a simple but useful lemma as follows.

Lemma 3. Let \(A\), \(B\) be two sets containing \(p\) and \(q\) elements, respectively. If \(p\leq q-1\) and \(A\setminus B\neq \emptyset\), then for any element \(c\), \((A\cup \{c\})\setminus B\neq \emptyset\) and \(B\setminus (A\cup \{c\})\neq \emptyset\).

Proof. Since \(A\setminus B\neq \emptyset\), there exists an element \(a \in A\) with \(a\notin B\), and hence \(|A\cap B|\leq p-1\). Together with \(q\geq p+1\), this implies that \(B\) contains at least two distinct elements \(b_1,b_2\) such that \(b_1\notin A\) and \(b_2\notin A\). Therefore, \(|B\setminus(A\cup \{c\})|\geq 1\) and \(|(A\cup \{c\})\setminus B|\geq 1\).

If a graph contains a 1-vertex, then we have the following observation with regard to the SAVTC.

Lemma 4. Suppose that \(G\) is a graph with a 1-vertex \(u\) such that \(G-\{u\}\) has a \(k\)-SAVTC, \(k\geq \Delta(G)+2\). Let \(\{v\}=N_G(u)\), \(d_G(v)=\ell\ (\geq 2)\), and let \(N\) be the set of \((\ell-1)^-\)-neighbors of \(v\). If \(|N|\leq k-\ell\), then every \(k\)-SAVTC \(f\) of \(G-\{u\}\) can be extended to a \(k\)-SAVTC of \(G\).

Proof. Based on \(f\), edge \(uv\) has \(k-\ell\) available colors under the coloring conditions (1), (2) and (3). Because \(|N|\leq k-\ell\), there exists an available color \(\alpha\in [1,k]\) for \(uv\) such that \(C_{f}(v')\not \subset (C_{f}(v)\cup \{\alpha\})\) for any \(v'\in N\setminus \{u\}\). By Lemma 3, we have that \((C_{f}(v)\cup \{\alpha\}) \not \subset C_{f}(v')\) for any \(v'\in N_G(v)\setminus N\). Therefore, we obtain a \(k\)-SAVTC of \(G\) after coloring \(u\) with a color in \([1,k]\setminus (C_{f}(v)\cup \{\alpha\})\).

2. Subcubic graphs

A graph \(G\) is said to be cubic if \(\delta(G)=\Delta(G)=3\) and subcubic if \(\Delta(G)\leq 3\). Since \(\chi_{at}(G)\leq 6\) for every cubic graph \(G\) [3, 4, 5], it follows by Lemma 2 that \(\chi_{sat}(G)\leq 6\) for every cubic graph \(G\). In this section, we aim to extend this result from cubic graphs to subcubic graphs. We prove the following theorem.

Theorem 5. If \(G\) is a subcubic graph, then \(\chi_{sat}(G)\leq 6\).

Proof. It is sufficient to deal with the case that \(G\) contains a \(2^-\)-vertex. Let \(G\) be a counterexample to Theorem 5 such that \(|E(G)|\) is minimum, and let \(v\) be a \(2^-\)-vertex of \(G\). We will prove that \(G\) admits a 6-SAVTC, and thus obtain a contradiction. Since \(G-\{v\}\) is subcubic, by the minimality \(G-\{v\}\) has a 6-SAVTC. If \(d_G(v)=1\), then by Lemma 4 we can get a 6-SAVTC of \(G\) from any 6-SAVTC of \(G-\{v\}\). Therefore, we assume \(d_{G}(v)=2\), and suppose that \(G\) does not contain any 1-vertex. Let \(\{u,w\}\) be the open neighborhood of \(v\).

Case 1. At least one of these two vertices, say \(u\), is a 2-vertex. Let \(\{u'\}=N_G(u)\setminus \{v\}\), and let \(f'\) be a 6-SAVTC of \(G-\{vu\}\), which exists by the minimality. We now extend \(f'\) by the following rule: if \((C_{f'}(u)\cup C_{f'}(w))\neq [1,6]\), let \(\alpha\in [1,6]\setminus (C_{f'}(u)\cup C_{f'}(w))\), and assign color \(\alpha\) to \(uv\) and recolor \(v\) with a color in \([1,6]\setminus (\{\alpha, f'(vw),f'(w)\}\cup C_{f'}(u))\) (observe that \(|C_{f'}(u)|=2\)). Denote by \(f\) the resulting coloring. Obviously, \(\alpha\notin C_f(w)\), \(f(v)\notin C_f(u)\), and by Lemma 3, \(C_f(u)\not \subset C_f(u')\). This shows that \(f\) is a 6-SAVTC of \(G\); if \((C_{f'}(u)\cup C_{f'}(w))=[1,6]\), then \(C_{f'}(u)\cap C_{f'}(w)=\emptyset\). We therefore obtain a 6-SAVTC of \(G\) by coloring \(uv\) with \(f'(w)\) and recoloring \(v\) with \(f'(uu')\).

Case 2. Both \(u\) and \(w\) are 3-vertices. When \(uw\in E(G)\), we will extend a 6-SAVTC \(f'\) of \(G-\{uv\}\) to a such coloring of \(G\). Let \(\{u'\}=N_G(u)\setminus \{v,w\}\). Observe that \(|C_{f'}(u)\cup \{f'(vw)\}|\leq 4\). We can choose a color \(\alpha\in [1,6]\setminus (C_{f'}(u)\cup \{f'(vw)\})\) such that \(C_{f'}(u')\not\subset (C_{f'}(u)\cup \{\alpha\})\). By Lemma 3, \((C_{f'}(u)\cup \{\alpha\})\not \subset C_{f'}(w)\). If \(|C_{f'}(u)\cup C_{f'}(w)\cup \{\alpha\}|\neq 6\), then we can recolor \(v\) with a color in \([1,6]\setminus (C_{f'}(u)\cup C_{f'}(w)\cup \{\alpha\})\) to get a 6-SAVTC of \(G\). If \(|C_{f'}(u)\cup C_{f'}(w)\cup \{\alpha\}|=6\), then either \(\alpha\notin C_{f'}(w)\) and \(f'(vw)\notin C_{f'}(u)\), or \(\alpha\in C_{f'}(w)\) and \(f'(vw)\notin C_{f'}(u)\) (or \(\alpha\notin C_{f'}(w)\) and \(f'(vw)\in C_{f'}(u)\)). In the former case, we can recolor \(v\) with a color in \([1,6]\setminus \{\alpha, f'(u),f'(vw), f'(w)\}\) to get a 6-SAVTC of \(G\), while in the latter eventuality we can recolor \(v\) with a color in \([1,6]\setminus (C_{f'}(w)\cup \{f'(u)\})\) (or \([1,6]\setminus (C_{f'}(u)\cup \{\alpha, f'(w)\})\)) to gain a 6-SAVTC of \(G\). In the remainder of this proof, let \(uw\notin E(G)\).

Consider \(G'=(G-\{v\}) \cup \{uw\}\). We see that \(G'\) is a subcubic graph and \(|E(G')|< |E(G)|\). By the minimality, \(G'\) has a 6-SAVTC \(f'\). Without loss of generality, we may suppose that \(C_{f'}(u)=\{1,2,3,4\}\), where \(f'(u)=4, f'(uw)=1\). Let \(N_{G}(u)=\{v,u_1,u_2\}\) and \(N_{G}(w)=\{v,w_1,w_2\}\). Suppose that \(\overline{C}_{f'}(w)=\{c_1,c_2\}\). We now extend \(f'\) to a 6-SAVTC of \(G\) by addressing the following two situations.

When \(1\notin \{f'(u_i), f'(w_i)|i=1,2\}\), we recolor \(u\) and \(w\) with 1, color \(uv\) with 4 and \(vw\) with \(f'(w)\). Denote by \(f\) the resulting coloring. Clearly, \(C_f(u)=C_{f'}(u)\) and \(C_f(w)=C_{f'}(w)\). If \(f'(w)\in \{5,6\}\), then \(f'(w)\notin C_f(u)\). Therefore, after coloring \(v\) with a color in \(\{c_1,c_2\}\setminus \{4\}\), we get a 6-SAVTC of \(G\). If \(f'(w)\notin \{5,6\}\), then \(f'(w)\in \{2,3\}\). Then, when \(4\in \{c_1,c_2\}\), we can color \(v\) with 5 or 6 to get a 6-SAVTC of \(G\); when \(4\notin \{c_1,c_2\}\), it follows that \(\{5,6\}\cap \{c_1,c_2\}\neq \emptyset\). Therefore, we obtain a 6-SAVTC of \(G\) by coloring \(v\) with a color in \(\{5,6\}\cap \{c_1,c_2\}\).

When \(1\in \{f'(u_i), f'(w_i)|i=1,2\}\), we, by symmetry, assume that \(f'(u_1)=1\). In this case, we first color \(vw\) with 1, and \(vu\) with a color \(c\in \{5,6\}\) such that \(C_{f'}(u_2)\not\subset \{2,3,4,c\}\). Then, color \(v\) with a color in \(\{c_1,c_2\}\setminus \{4,c\}\) if \(\{4,c\}\neq \{c_1,c_2\}\) (observe that \(f'(w)\notin \{c_1,c_2\}\)); otherwise, color \(v\) with a color in \(\{2,3\}\setminus \{f'(w)\}\). Denote by \(f\) the resulting coloring. Then \(C_f(w)=C_{f'}(w)\), \(C_f(u)=\{2,3,4,c\}\), \(1\in C_f(v), 1\in C_f(u_1), 1\notin C_f(u)\), and \(f(v)\notin C_f(w)\) or \(c\notin C_f(w)\). Therefore, \(f\) is a 6-SAVTC of \(G\).

3. Graphs with Smarandachely adjacent vertex total chromatic number 5

In this section, we aim to construct a 5-SAVTC for the given classes of subcubic graphs. For this, we will get a 5-SAVTC of a hypothetical smallest counterexample \(G\) to the theorem we need to prove by extending a 5-SAVTC \(f'\) of a smaller graph \(G'\) derived from \(G\), and thus obtain a contradiction. In the process of extending \(f'\), unless otherwise specified, we color the elements shared by \(G\) and \(G'\) with the restriction of \(f'\) to them.

3.1. Outerplanar graphs with maximum degree 3

A planar graph \(G\) is called outerplanar if there is an embedding of \(G\) into the Euclidean plane such that all vertices lie on the boundary of its unbounded face. An outerplanar graph equipped with such an embedding is called an outerplane graph. To show that outerplane graphs with maximum degree 3 have a 5-SAVTC, we need the following lemma.

Lemma 6. Suppose that \(f\) is a partial coloring of the graph \(G\) shown in Figure 1 (a), where \(V(G)=\{v,x,y,x_1,y_1\}\) and \(f(x_1)=c_1,f(y_1)=c_2,f(x_1x)=c_3,f(y_1y)=c_4\). If \(|\{c_i|i=1,2,3,4\}|\geq 3\), \(c_1\neq c_2, c_1\neq c_3, c_2\neq c_4\) and \(c_3\neq c_4\), then the partial coloring \(f\) can be extended to a 5-SAVTC of \(G\).

Proof. In Figures 1 (b) and (c), we give the corresponding 5-SAVTCs of \(G\) for the cases \(|\{c_i|i=1,2,3,4\}|=3\) and \(|\{c_i|i=1,2,3,4\}|=4\), respectively. Observe that under each 5-SAVTC of \(G\), \(c_1\) and \(c_2\) are not in the color-sets of \(x\) and \(y\), respectively, and \(c_1,c_2\) belong to the color-set of \(v\).

Figure 1. A graph and its certain colorings

Theorem 7. Let \(G\) be an outerplane graph with maximum degree 3. Then, \(\chi_{sat}(G)=5\).

Proof. It is enough to show that \(G\) has a 5-SAVTC. Let \(G\) be a counterexample to Theorem 7 with the minimum number of edges. We distinguish two cases. Case 1. \(G\) contains a cut-vertex \(v\). Then, there are two smaller outerplane graphs \(G_1\) and \(G_2\) such that \(\Delta(G_i)\leq 3, i=1,2\), \(G_1\cup G_2=G\) and \(G_1\cap G_2=\{v\}\). By the minimality, \(G_i\) has a 5-SAVTC, denoted by \(f_i,i=1,2\). Without loss of generality, suppose that \(d_{G_1}(v)\leq 2\), \(d_{G_2}(v)=1\) and \(N_{G_2}(v)=\{u\}\).

Case 1.1. \(|V(G_2)|=2\), i.e. \(G_2=vu\). If \(d_{G_1}(v)=1\), then by Lemma 4 any 5-SAVTC of \(G_1\) can be extended to a 5-SAVTC of \(G\). We therefore assume that \(d_{G_1}(v)=2\) and \(N_{G_1}(v)=\{v_1,v_2\}\). If \(\{v_1,v_2\}\) contains a 1-vertex or a 3-vertex, say \(v_1\), we extend \(f_1\) to a 5-SAVTC \(f\) by coloring \(vu\) with a color \(\alpha\in [1,5]\setminus C_{f_1}(v)\) such that \(C_{f_1}(v_2)\not \subset C_{f_1}(v)\cup \{\alpha\}\) (since \(|[1,5]\setminus C_{f_1}(v)|=2\), such a color \(\alpha\) does exist), and color \(u\) (or recolor \(v_1\) when \(d_{G_1}(v_1)=1\)) with \(\beta\), where \(\{\beta\}=[1,5]\setminus (C_{f_1}(v)\cup \{\alpha\})\). Since \(\beta\notin C_f(v)\) and by Lemma 3 \(C_f(v)\neq C_f(v_1)\) when \(d_{G_1}(v_1)=3\), \(f\) is a 5-SAVTC of \(G\).

Now, suppose that \(d_{G_1}(v_1)=d_{G_1}(v_2)=2\) and \(N_{G_1}(v_i)=\{v,v'_i\}\), \(i=1,2\). We omit the trivial case \(v_1v_2\in E(G_1)\) and by Lemma 4 assume \(d_{G_1}(v'_i)\geq 2\) for \(i=1,2\). By the minimality, let \(f'\) be a 5-SAVTC of \(G-\{v\}\). Based on \(f'\), if \(C_{f'}(v_1)\cap C_{f'}(v_2)\) contains an element \(\gamma\), then we color \(v, vu, vv_1,vv_2\) with \([1,5]\setminus \{\gamma\}\) properly and color \(u\) with \(\gamma\); if \(C_{f'}(v_1)\cap C_{f'}(v_2)=\emptyset\), we recolor \(v_1\) with a color \(\gamma \in (\{f'(v_2),f'(v_2v'_2)\}\setminus \{f'(v'_1)\})\) and color \(vv_1\) with \(f'(v_1)\), \(vv_2\) with \(f'(v_1v'_1)\), \(v\) with the color in \([1,5]\setminus(C_{f'}(v_1)\cup C_{f'}(v_2))\), \(vu\) with the color in \(\{f'(v_2),f'(v_2v'_2)\}\setminus \{\gamma\}\) and \(u\) with \(\gamma\). Denote by \(f\) the resulting coloring. Since \(\gamma\notin C_f(v)\) and by Lemma 3 \(C_f(v_i)\not\subset C_f(v'_i)\) for \(i=1,2\), \(f\) is a 5-SAVTC of \(G\).

Case 1.2. \(|V(G_2)|\geq 3\). Let \(G'_1=G_1\cup vu\), \(G'_2=G_2\). By the minimality, \(G'_1\) and \(G'_2\) have a 5-SAVTC \(f'_1\) and \(f'_2\), respectively. By permuting colors if necessary, we assume \(f'_1(v)=f'_2(v)=1, f'_1(vu)=f'_2(vu)=2\) and \(f'_1(u)=f'_2(u)=3\). Clearly, \(3\notin C_{f'_1}(v)\) and \(1\notin C_{f'_2}(u)\). Let \(f=f'_1\cup f'_2\). We see that \(C_f(v)=C_{f'_1}(v)\), \(C_f(u)=C_{f'_2}(u)\) and \(3\in C_f(u), 1\in C_f(v)\). Therefore, \(f\) is a 5-SAVTC of \(G\).

Case 2. \(G\) is 2-connected. We claim that \(G\) does not contain two adjacent 2-vertices. Suppose to the contrary that \(u,v\) are adjacent 2-vertices, where \(N_{G}(u)=\{v,u_1\}\) and \(N_{G}(v)=\{u,v_1\}\). Since \(G\) is 2-connected, \(d_{G}(u_1)\geq 2\) and \(d_{G}(v_1)\geq 2\). We first consider the case of \(u_1v_1\in E(G)\). In this case, \(d_{G}(u_1)=d_{G}(v_1)=3\). Let \(f'\) be a 5-SAVTC of \(G-\{u\}\), which exists by the minimality, and suppose that \(\overline{C}_{f'}(v_1)=\{5\}\). Obviously, \(5\in C_{f'}(u_1)\) and \(f'(v)=5\). Based on \(f'\), color \(uu_1\) with a color \(\alpha \in [1,5]\setminus C_{f'}(u_1)\) such that \(C_{f'}(u_2)\not \subset (C_{f'}(u_1)\cup \{\alpha\})\) (such a color exists since \(|C_{f'}(u_1)|=3\)), where \(\{u_2\}=N_{G}(u_1)\setminus \{v_1,u\}\). Let \(\{\beta\}=[1,5]\setminus (C_{f'}(u_1)\cup \{\alpha\})\), and color \(u\) with \(\beta\) and \(uv\) with a color in \([1,5]\setminus \{\alpha, \beta, f'(vv_1), 5\}\). Observe that \(5\notin \{\alpha, \beta\}\); we obtain a 5-SAVTC of \(G\). Now, we assume that \(u_1v_1\notin E(G)\). Let \(G'=(G-\{u,v\})\cup \{u_1v_1\}\), and, by the minimality, let \(f'\) be one of its 5-SAVTCs. Since \(u_1v_1\in E(G')\), there exist \(\alpha_1 \in \overline{C}_{f'}(u_1)\) and \(\alpha_2\in \overline{C}_{f'}(v_1)\) such that \(\alpha_1\neq \alpha_2\). Then, \(f'\) can be extended to a 5-SAVTC of \(G\) by assigning color \(f'(u_1v_1)\) to \(uu_1\) and \(vv_1\), color \(\alpha_1\) to \(u\), color \(\alpha_2\) to \(v\), and a color in \([1,5]\setminus \{f'(u_1v_1),\alpha_1, \alpha_2\}\) to \(uv\). In both cases \(G\) has a 5-SAVTC, which contradicts the choice of \(G\) and proves the claim.

From the foregoing discussion, we deduce that \(G\) contains a triangle \(uvwu\) such that \(d_G(u)=2\) and \(d_G(v)=d_G(w)=3\). Let \(N_G(v)=\{u,w,v'\}\) and \(N_G(w)=\{u,v,w'\}\). Clearly, \(d_{G}(v')\geq 2\) and \(d_{G}(w')\geq 2\). Let \(G'=(G-\{v,w\})\cup \{uv',uw'\}\). By the minimality, \(G'\) has a 5-SAVTC, say \(f'\). Suppose that \(f'(uv')=c_1, f'(uw')=c_2\), \(f'(v')=c_3\) and \(f'(w')=c_4\), where \(c_i \in [1,5]\) for \(i\in[1,4]\). We now define a partial coloring \(g\) of \(G\) such that \(g(x)=f'(x)\) for any \(x\in T(G)\setminus \{u,v,w,uv,uw,vw,vv' ww'\}\), \(g(vv')=f'(uv')=c_1\) and \(g(ww')=f'(uw')=c_2\). We see that \(C_{g}(y)=C_{f'}(y)\) for any \(y\in V(G)\setminus \{u,v,w\}\), and only elements in \(\{u,v,w,uv,uw,vw\}\) are uncolored. Observe that \(c_1\neq c_2\), \(c_1\neq c_3\) and \(c_2\neq c_4\). It suffices to deal with the situation of \(c_3=c_4\) (if \(c_3\neq c_4\), then by Lemma 6 \(g\) can be extended to a 5-SAVTC of \(G\)). Since \(d_{G}(v')\geq 2\), there exists a color \(\alpha \in C_{f'}(v')\) such that \(\alpha \notin \{c_1,c_3\}\). We color \(uv\) with \(c_3\), \(w\) with \(c_1\), \(vw\) with a color \(\beta \in [1,5]\setminus \{c_1,c_2,c_3,\alpha\}\), \(v\) with a color \(\gamma \in [1,5]\setminus \{c_1,c_3,\alpha, \beta\}\), \(uw\) with a color \(\alpha'\in [1,5]\setminus \{c_1,c_3,c_2,\beta\}\), and color \(u\) with \(\alpha\) (when \(\alpha'\neq \alpha\)) or with \(\beta\) (when \(\alpha'=\alpha\)). It is clear from the resulting coloring, say \(f\), that \(c_3\notin C_f(w)\), \(\alpha\notin C_f(v)\) and \(\{\alpha, c_3\}\subset C_f(u)\). Therefore, \(f\) is a 5-SAVTC of \(G\). This completes the proof.

3.2. Claw-free subcubic graphs

A graph is called claw-free if it contains no induced subgraph isomorphic to the complete bipartite graph \(K_{1,3}\). In this section, we will show that every claw-free subcubic graph has a 5-SAVTC. To see this, we first investigate an interesting class of claw-free subcubic graphs as follows.

We use \(\mathcal{D}\) to denote the family of graphs obtained as follows: take a cubic graph in which every vertex is incident with exactly one triangle, and subdivide all edges that are not incident with triangles, where to subdivide an edge \(e\) is to delete \(e\), add a new vertex \(v\), and join \(v\) to the ends of \(e\). By this definition, we see that for every graph \(G\in \mathcal{D}\), \(\delta(G)=2\), the 2-vertices are independent and not incident with any triangle, and the subgraph induced by the 3-vertices is the union of vertex-disjoint triangles. We prove that every such graph has a 5-SAVTC.
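To make the construction of \(\mathcal{D}\) concrete, the following sketch (our own illustration, not part of the original proof; it uses the networkx library, and the function and variable names are ours) builds a member of \(\mathcal{D}\) by subdividing every edge of a suitable cubic graph that does not lie on a triangle; the 3-prism serves as a small test case.

```python
import networkx as nx

def subdivide_non_triangle_edges(cubic):
    """Build a member of the family D from a cubic graph in which every
    vertex lies on exactly one triangle (this property is assumed, not checked)."""
    H = cubic.copy()
    # an edge uv lies on a triangle iff u and v have a common neighbour
    non_triangle = [(u, v) for u, v in cubic.edges()
                    if not (set(cubic[u]) & set(cubic[v]))]
    for i, (u, v) in enumerate(non_triangle):
        w = ("sub", i)                     # new 2-vertex created by the subdivision
        H.remove_edge(u, v)
        H.add_edge(u, w)
        H.add_edge(w, v)
    return H

# Small example: the 3-prism (circular ladder CL_3) is cubic and every vertex lies
# on exactly one of its two triangles; subdividing the three rungs gives a graph
# in D whose 2-vertices form an independent set.
prism = nx.circular_ladder_graph(3)
G = subdivide_non_triangle_edges(prism)
print(sorted(d for _, d in G.degree()))    # -> [2, 2, 2, 3, 3, 3, 3, 3, 3]
```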

Theorem 8. For any \(G\in \mathcal{D}\), \(\chi_{sat}(G)=5\).

Proof. \(\chi_{sat}(G)\geq 5\) is obvious. We will give a construction of a 5-SAVTC of \(G\) by applying Hall's theorem on bipartite graphs. Let \(V_2\) and \(V_3\) be the sets of 2-vertices and 3-vertices of \(G\), respectively. Then, \(V_2\) is an independent set, and the subgraph induced by \(V_3\) is the union of vertex-disjoint triangles, say \(T_1, T_2, \ldots, T_m\). Now, we construct a bipartite graph \(G'\) with bipartition \((X,Y)\), where \(X=V_2, Y=\{T_1,T_2,\ldots, T_m\}\), and \(xT\in E(G')\) for \(x\in X\), \(T\in Y\), if and only if \(x\) is adjacent to a vertex of \(T\) in \(G\). By the definition, we see that \(d_{G'}(x)=2\) for every \(x\in X\) and \(d_{G'}(T)=3\) for every \(T\in Y\). Therefore, by Hall's theorem, \(G'\) has a matching \(M_1\) which covers every vertex in \(Y\). Consider \(G'-M_1\); we have that \(d_{G'-M_1}(x)\leq 2\) for every \(x\in X\) and \(d_{G'-M_1}(T)=2\) for every \(T\in Y\). Again by Hall's theorem, \(G'-M_1\) has a matching \(M_2\) which covers every vertex in \(Y\).

We assert that \(G'-M_1\) contains such an \(M_2\) that also covers every 2-vertex in \(X\). If not, select \(M_2\) to be one that covers the largest number of 2-vertices in \(X\). Let \(x\) be a 2-vertex in \(X\) not covered by \(M_2\). Then, there is an \(M_2\)-alternating path \(P\) starting at \(x\) and ending at a 1-vertex in \(X\), and \(M_2\vartriangle E(P)\) (the symmetric difference of \(M_2\) and \(E(P)\)) is also a matching of \(G'-M_1\) which covers every vertex in \(Y\) and covers more 2-vertices in \(X\) than \(M_2\) does, a contradiction.

Let \(M_1\) and \(M_2\) be the two matchings selected as above. Then, in \(G'-(M_1\cup M_2)\), every \(x\in X\) is a \(1^-\)-vertex and every \(T\in Y\) is a \(1\)-vertex. Now, we present an algorithm to construct a 5-SAVTC \(f\) of \(G\) as follows:

  • 1: For each \(T_i\), \(i\in [1,m]\), we use \([1,3]\) to color its vertices and edges so that the coloring conditions (1), (2) and (3) are satisfied. That is, writing \(V(T_i)=\{u_i,v_i,w_i\}\), we color \(u_i,v_i,w_i\) with 1, 2, 3, respectively, and color \(u_iv_i, v_iw_i, w_iu_i\) with 3, 1, 2, respectively.
  • 2: Color each edge in \(M_1\) with 4 and in \(M_2\) with 5.
  • 3: Observe that each \(v\in V_2\) is a \(1^-\)-vertex in \(G-(M_1\cup M_2)\). For each edge \(xy\in E(G)\setminus (M_1\cup M_2)\) such that \(x\in V_2\) and \(y\in V_3\), let \(xy'\) be the other edge incident with \(x\) in \(G\) and assume \(xy'\) is colored by \(\alpha\). Clearly, \(\alpha\in [4,5]\) since \(xy'\in M_1\cup M_2\). Suppose that \(y\) and \(y'\) are colored with \(\beta\) and \(\beta'\) respectively, \(\beta,\beta'\in [1,3]\). If \(\beta\neq \beta'\), we recolor \(y\) with \(\alpha\), color \(xy\) with the color in \([4,5]\setminus \{\alpha\}\) and color \(x\) with \(\beta\); if \(\beta=\beta'\), then we recolor the vertices and edges of the triangle \(T\in Y\) incident with \(y\) so that \(\beta\) is not assigned to \(y\) (observe that each triangle \(T_i\) has only one vertex incident with uncolored edges in this step; therefore, each such triangle \(T\) is recolored at most once, and the recoloring does not destroy the coloring constructed before this action). Thus, we can likewise recolor \(y\) with \(\alpha\), color \(xy\) with the color in \([4,5]\setminus \{\alpha\}\) and color \(x\) with the color appearing at \(y\).
  • 4: After the above three steps, only some 2-vertices may remain uncolored. Let \(x\) be such an uncolored 2-vertex. Suppose that \(N_G(x)=\{x_1,x_2\}\), and let \(\alpha_1\) and \(\alpha_2\) be the colors appearing at \(x_1\) and \(x_2\). We then color \(x\) with a color in \([1,3]\setminus \{\alpha_1,\alpha_2\}\).
According to the above coloring, we see that for each triangle \(T_i\), \(i\in [1,m]\), \(\{\overline{C}_f(u_i), \overline{C}_f(v_i), \overline{C}_f(w_i)\} = \{4,5,\beta\}\) where \(\beta\in [1,3]\); for each 2-vertex \(x\), \(C_f(x)=\{4,5,\beta\}\) and \(\{\overline{C}_f(x_1),\overline{C}_f(x_2)\} \subset \{\{4, \beta\}, \{5, \beta\}, \{4, 5\}\}\), where \(\{x_1,x_2\}=N_G(x)\). Therefore, \(f\) is a 5-SAVTC of \(G\).
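The two matchings in Steps 1 and 2 can also be computed mechanically. The sketch below (our own illustration; the auxiliary graph, the helper names and the toy instance are ours) obtains \(M_1\) and \(M_2\) with networkx's Hopcroft–Karp matching; the additional requirement that \(M_2\) cover every 2-vertex of \(X\) in \(G'-M_1\) would be enforced by the alternating-path exchange described above and is not implemented here.

```python
import networkx as nx
from networkx.algorithms import bipartite

def two_matchings_covering_Y(Gp, Y):
    """Gp is the auxiliary bipartite graph of the proof: X = 2-vertices of G,
    Y = triangles, deg(x) = 2 for x in X and deg(T) = 3 for T in Y.  Returns
    matchings M1 and M2, each covering every vertex of Y (Hall's condition
    holds, as argued in the proof)."""
    def matching_covering_Y(H):
        mate = bipartite.hopcroft_karp_matching(H, top_nodes=Y)
        assert all(T in mate for T in Y), "Y must be saturated (Hall's theorem)"
        return {frozenset((T, mate[T])) for T in Y}

    M1 = matching_covering_Y(Gp)
    H = Gp.copy()
    H.remove_edges_from(tuple(e) for e in M1)
    M2 = matching_covering_Y(H)
    return M1, M2

# Toy instance: for the subdivided 3-prism, Y consists of two triangles and X of
# three 2-vertices, each adjacent in G to one vertex of each triangle.
Gp = nx.Graph([(x, T) for x in "abc" for T in ("T1", "T2")])
M1, M2 = two_matchings_covering_Y(Gp, {"T1", "T2"})
print(M1, M2)
```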

Theorem 9. Let \(G\) be a claw-free subcubic graph. Then, \(\chi_{sat}(G)=5\).

Proof. Suppose to the contrary that \(G\) is a counterexample to Theorem 9 such that \(|E(G)|\) is minimum. It is sufficient to prove that \(G\) has a 5-SAVTC. By a similar argument to that in Theorem 7, we have the following claims.
Claim A. \(G\) is 2-connected, and \(G\) contains neither adjacent 2-vertices nor triangles incident with a 2-vertex. To round off the proof, we have to deal with some reducible configurations.
Claim B. \(G\) does not contain the configurations \(\mathcal{H}_1, \mathcal{H}_2, \mathcal{H}_3\), as shown in Figure 2 (a), (d) and (g).

Figure 2. Unavoidable configurations

Proof of Claim B. We will show that each of these configurations is reducible, i.e., \(G\) has a 5-SAVTC if \(G\) contains one of them. Observe that \(K_4\) has a 5-SAVTC. Therefore, we assume that \(G\neq K_4\) in what follows.

Case 1. For \(\mathcal{H}_1\), since \(G\neq K_4\) and \(G\) is claw-free, we, by Claim A, may assume that \(x\neq u_4\), \(y\neq u_1\), \(x\neq y\) and \(xy\notin E(G)\) (if \(x=y\) or \(xy\in E(G)\), then \(G\) is isomorphic to the graph shown in Figure 2 (b) or (c), which has a 5-SAVTC). Let \(G'=(G-\{u_i|i\in [1,4]\})\cup \{xy\}\). Then, \(G'\) is claw-free and subcubic.

By the minimality, \(G'\) admits a 5-SAVTC, say \(g\). Without loss of generality, assume \(g(x)=1, g(y)=2\) and \(g(xy)=3\). We can extend \(g\) to a 5-SAVTC of \(G\) by coloring elements in \(T(G)\setminus T(G')\) as follows: assign color \(3\) to \(xu_1\), \(u_2u_3\) and \(yu_4\), color \(4\) to \(u_1\) and \(u_2u_4\), color \(5\) to \(u_4\) and \(u_1u_3\), color \(2\) to \(u_3\) and \(u_1u_2\), and color \(1\) to \(u_2\) and \(u_3u_4\).

Case 2. As for \(\mathcal{H}_2\), if \(u_1u_6\in E(G)\), \(x=y\) or \(xy\in E(G)\), then by Claim A \(G\) is isomorphic to the graph shown in Figure 2 \((e)\), \((f)\) or \((g)\), which has a 5-SAVTC. Let \(G'=(G-\{u_i|i\in [1,6]\})\cup \{xy\}\). Then, \(G'\) is claw-free and subcubic. By the minimality \(G'\) has a 5-SAVTC \(g\). We, without loss of generality, assume that \(g(x)=1, g(y)=2\) and \(g(xy)=3\). Now, based on the restriction of \(g\) to \(T(G)\cap T(G')\), we construct \(f\) by letting \(f(xu_1)=f(yu_6)=f(u_2u_3)=f(u_4u_5)=3\), \(f(u_1)=f(u_4)=f(u_3u_5)=2\), \(f(u_3)=f(u_6)=f(u_2u_4)=1\), \(f(u_1u_2)=f(u_5u_6)=4\) and \(f(u_2)=f(u_5)=f(u_1u_3)=f(u_4u_6)=5\). Then, \(C_f(x)=C_g(x)\), \(C_f(y)=C_g(y)\), \(\overline{C}_f(u_1)=\overline{C}_f(u_5)=\{1\}\), \(\overline{C}_f(u_2)=\overline{C}_f(u_6)=\{2\}\), \(\overline{C}_f(u_3)=\overline{C}_f(u_4)=\{4\}\). Hence \(f\) is a 5-SAVTC of \(G\).

Case 3. Consider \(\mathcal{H}_3\). By Claim A, Case 1 and Case 2, we suppose that \(x_1\neq x_2\), \(y_1\neq y_2\), \(x_i\notin \{u_4,u_5\}\), \(y_i\notin \{u_2,u_3\}\), \(x_1x_2\notin E(G)\) and \(y_1y_2\notin E(G)\). Let \(G'=(G-\{u_i|i\in [1,6]\})\cup \{x_1x_2,y_1y_2\}\). Obviously, \(G'\) is a claw-free subcubic graph with \(|E(G')|< |E(G)|\). By the choice of \(G\), \(G'\) has a 5-SAVTC \(g\). Without loss of generality, we assume that \(g(x_1)=1, g(x_2)=2, g(x_1x_2)=3, g(y_1)=c_1, g(y_2)=c_2\), and \(g(y_1y_2)=c_3\), where \(c_i\in [1,5]\) for \(i=1,2,3\) and \(c_i\neq c_j\) for \(1\leq i< j\leq 3\). We now construct a 5-SAVTC \(f\) of \(G\) based on the restriction of \(g\) to \(T(G)\cap T(G')\). We first assign color \(c_3\) to \(y_1u_4\) and \(y_2u_5\), and color \(3\) to \(x_1u_2\) and \(x_2u_3\). Clearly, \(C_f(t)=C_{g}(t)\) for any \(t\in \{x_1,x_2,y_1,y_2\}\).

Case 3.1. \(\{c_1,c_2\}\cap \{1,2\}\neq \emptyset\). Then by symmetry we assume that \(c_1=1\). Let \([1,5]=\{c_1,c_2,c_3,c_4,c_5\}\), and set \(f(u_4)=f(u_5u_6)=c_5\), \(f(u_5)=f(u_6u_1)=c_1\), \(f(u_4u_5)=c_4\), \(f(u_4u_6)=c_2\), \(f(u_6)=c_3\), \(f(u_2)=f(u_3u_1)=5\), \(f(u_2u_3)=4\), \(f(u_2u_1)=2\), \(f(u_3)=1\), and finally color \(u_1\) with a color in \(\{3,4\}\setminus \{c_3\}\) when \(c_4\in \{2,5\}\) or with the color \(c_4\) when \(c_4\in \{3,4\}\). According to the definition of \(f\), we have that \(\overline{C}_f(u_2)=\{1\}, \overline{C}_f(u_3)=\{2\}, \overline{C}_f(u_4)=\{c_1\}\), \(\overline{C}_f(u_5)=\{c_2\}\), \(\overline{C}_f(u_6)=\{c_4\}\), \(\{1,2,c_4\}\subset C_f(u_1)\). Thus we obtain a 5-SAVTC of \(G\).

Case 3.2. \(\{c_1,c_2\}\cap \{1,2\}=\emptyset\). When \(c_3\notin \{1,2\}\), or \(c_3\in \{1,2\}\) and \((\{1,2\}\setminus \{c_3\})\in C_f(y_i)\) for some \(i\in \{1,2\}\), we by symmetry assume that \(c_3=1\) when \(c_3\in \{1,2\}\), and suppose that \(2\in C_f(y_1)\) (observe that if \(c_3\notin \{1,2\}\), then since \(d_G(y_i)\geq 2\) for \(i=1,2\), there exists a color, say 2 here, in \(\{1,2\}\cap C_f(y_i)\) for some \(i\in \{1,2\}\)). Let \(\{\alpha\}=[1,5]\setminus \{c_1,c_2,c_3,2\}\), and set \(f(u_4)=f(u_5u_6)=\alpha\), \(f(u_5)=f(u_6u_1)=2\), \(f(u_4u_5)=c_1\), \(f(u_4u_6)=c_2\), \(f(u_6)=c_1\), \(f(u_3)=f(u_1u_2)=5\), \(f(u_2u_3)=4\), \(f(u_3u_1)=1\), \(f(u_2)=2\) and color \(u_1\) with \(c_3\) (when \(c_3\notin \{1,5\}\)) or a color in \(\{3,4\}\setminus \{c_1\}\) (when \(c_3\in \{1,5\}\)). Under such coloring \(f\), it follows that \(\overline{C}_f(u_2)=\{1\}, \overline{C}_f(u_3)=\{2\}\), \(\overline{C}_f(u_4)=\{2\}\), \(\overline{C}_f(u_5)=\{c_2\}\), \(\overline{C}_f(u_6)=\{c_3\}\) and \(\{1,2,c_3\}\subset C_f(u_1)\), and hence \(f\) is a 5-SAVTC of \(G\).

When \(c_3\in \{1,2\}\) and \((\{1,2\}\setminus \{c_3\})\notin C_f(y_i)\) for \(i=1,2\), it follows that \(d_G(y_1)=d_G(y_2)=2\) (otherwise \(C_g(y_1)\subseteq C_g(y_2)\) or \(C_g(y_2)\subseteq C_g(y_1)\)).

Suppose that \(c_3=1\) and let \(N_G(y_1)=\{u_4,y'\}\). Then, \(d_G(y')=3\) and \(y'\) is incident with a triangle. If \(g(y')\neq 1\), we recolor \(y_1\) with 1 and color \(u_4y_1\) with \(c_1\). Let \(\{\alpha\}=[1,5]\setminus \{c_1,c_2,1,2\}\), and set \(f(u_4)=f(u_5u_6)=2\), \(f(u_5)=c_1\), \(f(u_4u_5)=f(u_6)=\alpha\), \(f(u_4u_6)=c_2\), \(f(u_2)=f(u_3u_1)=5\), \(f(u_2u_3)=4\), \(f(u_2u_1)=2\), \(f(u_3)=f(u_1u_6)=1\), and finally color \(u_1\) with \(c_1\) (when \(c_1\neq 5\)) or a color in \(\{3,4\}\setminus \{\alpha\}\) (when \(c_1=5\)). Since \(\overline{C}_f(u_2)=\{1\}, \overline{C}_f(u_3)=\{2\}\), \(\overline{C}_f(u_4)=\{1\}\), \(\overline{C}_f(u_5)=\{c_2\}\), \(\overline{C}_f(u_6)=\{c_1\}\) and \(\{1,2,c_1\}\subset C_f(u_1)\), \(f\) is a 5-SAVTC of \(G\).

If \(g(y')=1\), then \(c_1\notin C_f(y')\). We recolor \(y_1\) with a color \(\beta\in ([1,5]\setminus \{1,c_2, c_1, g(y_1y')\})\), and color \(u_4y_1\) with \(c_1\). Clearly, \(2\in \{\beta,g(y_1y')\}\). Let \(\{\alpha'\}=\{\beta, g(y_1y')\}\setminus \{2\}\), and set \(f(u_4)=1\), \(f(u_5u_6)=c_1\), \(f(u_5)=2\), \(f(u_4u_5)=f(u_6)=\alpha'\), \(f(u_4u_6)=c_2\), \(f(u_3)=f(u_2u_1)=5\), \(f(u_2u_3)=4\), \(f(u_3u_1)=1\), \(f(u_2)=f(u_1u_6)=2\), and color \(u_1\) with a color in \(\{3,4\}\setminus \{\alpha'\}\). It is easy to see that \(\overline{C}_f(u_2)=\{1\}, \overline{C}_f(u_3)=\{2\}\), \(\overline{C}_f(u_4)=\{2\}\), \(\overline{C}_f(u_5)=\{c_2\}\), \(\overline{C}_f(u_6)=\{1\}\) and \(\{1,2\}\subset C_f(u_1)\). Therefore, \(f\) is a 5-SAVTC of \(G\).

By Claims A and B, we see that \(G\) is a 2-connected claw-free subcubic graph which contains no adjacent 2-vertices, no triangles incident with 2-vertices, and no two triangles sharing a common edge or connected by an edge (i.e., an edge whose ends are incident with two distinct triangles). This indicates that \(G\in \mathcal{D}\), and by Theorem 8 \(G\) has a 5-SAVTC. This completes the proof of the theorem.

4. Remarks

For two graphs \(G\) and \(H\), let \(\sigma: V(G)\rightarrow V(H)\) be a surjection. If for every \(v\in V(G)\), the restriction of \(\sigma\) to the open neighbourhood of \(v\) in \(G\) is a bijection onto the open neighbourhood of \(\sigma(v)\) in \(H\), i.e. \(\sigma(N_G(v))=N_H(\sigma(v))\), then we call \(\sigma\) a covering map from \(G\) to \(H\). If there exists a covering map from \(G\) to \(H\), then \(G\) is called a covering graph of \(H\). As for covering graphs, we have the following conclusion on SAVTC.

Theorem 10. Let \(H\) be a graph containing a \(k\)-SAVTC \(g\). Then, every covering graph \(G\) of \(H\) has a \(k\)-SAVTC.

Proof. Let \(\sigma\) be a covering map from \(G\) to \(H\). We now use \(\sigma\) to lift \(g\) to a proper total \(k\)-coloring \(f\) of \(G\), i.e., let \(f(v)=g(\sigma(v))\) for every \(v\in V(G)\) and \(f(uw)=g(\sigma(u)\sigma(w))\) for every \(uw\in E(G)\). According to the definition of a covering map, if \(uw\in E(G)\) then \(\sigma(u)\sigma(w)\in E(H)\). We have \(f(u)(=g(\sigma(u)))\neq f(w)(=g(\sigma(w)))\) for every \(uw\in E(G)\), \(f(vu)(=g(\sigma(v)\sigma(u)))\neq f(vw)(=g(\sigma(v)\sigma(w)))\) for any \(vu,vw\in E(G)\), and \(f(v)(=g(\sigma(v))) \neq f(vu)(=g(\sigma(v)\sigma(u)))\) for any \(v\in V(G)\) and \(vu\in E(G)\). This shows that \(f\) is a proper total \(k\)-coloring of \(G\). Moreover, it is easy to see that \(C_f(v)=C_g(\sigma(v))\) for every \(v\in V(G)\). Therefore, \(f\) is a \(k\)-SAVTC of \(G\).
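A minimal sketch of this lifting (our own illustration; the colorings are stored as dictionaries keyed by vertices and by frozenset edges, which is a convention of this sketch rather than anything prescribed by the paper):

```python
def lift_total_coloring(G, sigma, h_vertex_color, h_edge_color):
    """Lift a total colouring of H to the covering graph G along the covering map
    sigma: V(G) -> V(H).  G is a networkx-style graph; h_vertex_color maps vertices
    of H to colours and h_edge_color maps frozenset edges of H to colours."""
    f_vertex = {v: h_vertex_color[sigma[v]] for v in G.nodes()}
    f_edge = {frozenset((u, w)): h_edge_color[frozenset((sigma[u], sigma[w]))]
              for u, w in G.edges()}
    # By the covering property, sigma(u)sigma(w) is an edge of H whenever uw is an
    # edge of G, and C_f(v) = C_g(sigma(v)) for every vertex v, so the lifted
    # colouring inherits the k-SAVTC property of the colouring of H.
    return f_vertex, f_edge
```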

Figure 3. Two graphs with \(5\)-SAVTC

In this paper, we discuss an interesting graph parameter \(\chi_{sat}\), called the Smarandachely adjacent vertex total chromatic number. We derive an upper bound for subcubic graphs \(G\), namely \(\chi_{sat}(G)\leq 6\). We show, in particular, that if \(G\) is an outerplane graph with maximum degree 3 or a claw-free subcubic graph, then \(\chi_{sat}(G)=5\). There are also other classes of subcubic graphs with a 5-SAVTC, e.g., the subcubic bipartite graphs. Indeed, for any bipartite \(G\) with bipartition \((X,Y)\), we can easily give a \((\Delta(G)+2)\)-SAVTC by assigning color 1 to vertices in \(X\), color 2 to vertices in \(Y\), and coloring \(E(G)\) with \([3, \Delta(G)+2]\) (since the edge chromatic number of a bipartite graph equals its maximum degree). Additionally, by Theorem 10, we can also exhibit a series of subcubic graphs with a 5-SAVTC which are non-outerplanar and contain a claw, for example, the covering graphs of the cube (hexahedron) or the Petersen graph; see Figure 3.
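As an illustration of the bipartite construction just described (our own sketch; the edge-colouring routine is a standard Kempe-chain implementation of König's edge-colouring theorem, not something taken from the paper), one could proceed as follows.

```python
import networkx as nx
from networkx.algorithms import bipartite

def bipartite_edge_coloring(G):
    """Properly colour the edges of a bipartite graph with Delta(G) colours
    (Konig's theorem), via alternating-path (Kempe chain) recolourings."""
    Delta = max(d for _, d in G.degree())
    palette = range(1, Delta + 1)
    color = {}                       # frozenset({u, v}) -> colour in 1..Delta
    used = {v: {} for v in G}        # used[v][c] = neighbour joined to v by colour c
    for u, v in G.edges():
        a = next(c for c in palette if c not in used[u])   # colour free at u
        b = next(c for c in palette if c not in used[v])   # colour free at v
        if a != b:
            # Swap colours a/b along the alternating path starting at v; in a
            # bipartite graph this path never reaches u, so a stays free at u
            # and becomes free at v as well.
            path, x, want = [], v, a
            while want in used[x]:
                y = used[x][want]
                path.append((x, y, want))
                x, want = y, (b if want == a else a)
            for x, y, c in path:
                new = b if c == a else a
                del used[x][c], used[y][c]
                used[x][new], used[y][new] = y, x
                color[frozenset((x, y))] = new
        color[frozenset((u, v))] = a
        used[u][a], used[v][a] = v, u
    return color

def bipartite_savtc(G):
    """(Delta(G)+2)-SAVTC of a bipartite graph G, following the remark above:
    one part gets colour 1, the other colour 2, edges get colours 3..Delta+2."""
    X, _ = bipartite.sets(G)
    f_vertex = {v: (1 if v in X else 2) for v in G}
    f_edge = {e: c + 2 for e, c in bipartite_edge_coloring(G).items()}
    return f_vertex, f_edge

# Example: the 3-cube Q_3 is bipartite and cubic, so this yields a 5-SAVTC.
fv, fe = bipartite_savtc(nx.hypercube_graph(3))
print(max(fe.values()))   # -> 5
```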

In consideration of our conclusions, we propose the following problem:

Problem 11. Let \(G\) be a subcubic graph. Is it true that \(\chi_{sat}(G)\leq 5\)?

Author Contributions

All authors contributed equally to the writing of this paper. All authors read and approved the final manuscript.

Competing Interests

The author(s) do not have any competing interests in the manuscript.

References

  1. Bondy, J., & Murty, U. (1976). Graph theory with applications. North-Holland, New York. [Google Scholor]
  2. Zhang, Z., Chen, X., Li, J., Yao, B., Lu, X., & Wang, J. (2005). On the adjacent-vertex-distinguishing total coloring of graphs. Science China Mathematics Series A 48, 289--299. [Google Scholor]
  3. Chen, X. (2008). On the adjacent vertex distinguishing total coloring numbers of graphs with \(\Delta=3\). Discrete Mathematics 308, 4003--4007. [Google Scholor]
  4. Hulgan, J. (2009). Concise proofs for adjacent vertex-distinguishing total colorings. Discrete Mathematics 309, 2548--2550. [Google Scholor]
  5. Wang, H. (2007). On the adjacent vertex distinguishing total chromatic numbers of graphs with \(\Delta=3\). Journal of Combinatorial Optimization 14, 87--109. [Google Scholor]
  6. Wang, W., & Wang, P. (2009). Adjacent vertex distinguishing total coloring of \(K_4\)-minor free graphs. Sci China Ser A 39, 1462--1472. [Google Scholor]
  7. Huang, D., & Wang, W. (2012). Adjacent vertex distinguishing total coloring of planar graphs with large maximum degree. Sci Sin Math 42, 151--164. [Google Scholor]
  8. Wang, W., & Wang, Y. (2008). Adjacent vertex distinguishing total coloring of planar graphs with lower average degree. Taiwanese Journal of Mathematics 12, 979--990. [Google Scholor]
  9. Wang, W., & Wang, P. (2010). Adjacent vertex distinguishing total colorings of outerplanar graphs. Journal of Combinatorial Optimization 19, 123--133. [Google Scholor]
  10. Miao, Z., Shi, R., Hu, X., & Luo, R. (2016). Adjacent-vertex-distinguishing total coloring of 2-degenerate graphs. Discrete Mathematics 339, 2446--2449. [Google Scholor]
  11. Lu, Y., Li, J., Luo, R., & Miao, Z. (2017). Adjacent vertex distinguishing total coloring of graphs with maximum degree 4. Discrete Mathematics 340, 119--123.[Google Scholor]
  12. Zhu, E., Liu, C., & Xu, J. (2017). On adjacent vertex-distinguishing total chromatic number of generalized Mycielski graphs. Taiwanese Journal of Mathematics 21(2), 253--266. [Google Scholor]
  13. Zhang, Z. (2009). Smarandachely adjacent vertex total coloring of graphs. The Scientific report of Lanzhou Jiaotong University, 2--3. [Google Scholor]
]]>
On oscillatory second-order nonlinear delay differential equations of neutral type https://old.pisrt.org/psr-press/journals/oms-vol-3-2019/on-oscillatory-second-order-nonlinear-delay-differential-equations-of-neutral-type/ Sat, 30 Nov 2019 14:52:11 +0000 https://old.pisrt.org/?p=3506
OMS-Vol. 3 (2019), Issue 1, pp. 382 – 389 Open Access Full-Text PDF
Sandra Pinelas, Shyam Sundar Santra
Abstract: In this paper, new sufficient conditions are obtained for oscillation of second-order neutral delay differential equations of the form \(\frac{d}{dt} \Biggl[r(t) \frac{d}{dt} \biggl [x(t)+p(t)x(t-\tau)\biggr]\Biggr]+q(t)G\bigl(x(t-\sigma_1)\bigr)+v(t)H\bigl(x(t-\sigma_2)\bigr)=0, \;\; t \geq t_0,\) under the assumptions \(\int_{0}^{\infty}\frac{d\eta}{r(\eta)}=\infty\) and \(\int_{0}^{\infty}\frac{d\eta}{r(\eta)}<\infty\) for \(|p(t)|<+\infty\). Two illustrative examples are included.
]]>

Open Journal of Mathematical Sciences

On oscillatory second-order nonlinear delay differential equations of neutral type

Sandra Pinelas\(^1\), Shyam Sundar Santra
Academia Militar, Departamento de Ciencias Exactas e Naturais, Av. Conde Castro Guimaraes, 2720-113, Amadora, Portugal.; (S.P)
Department of Mathematics, Sambalpur University, Sambalpur 768019, India.; (S.S.S)
Department of Mathematics, JIS College of Engineering, Kalyani 741235, India.; (S.S.S)
\(^{1}\)Corresponding Author: sandra.pinelas@gmail.com

Abstract

In this paper, new sufficient conditions are obtained for oscillation of second-order neutral delay differential equations of the form \(\frac{d}{dt} \Biggl[r(t) \frac{d}{dt} \biggl [x(t)+p(t)x(t-\tau)\biggr]\Biggr]+q(t)G\bigl(x(t-\sigma_1)\bigr)+v(t)H\bigl(x(t-\sigma_2)\bigr)=0, \;\; t \geq t_0,\) under the assumptions \(\int_{0}^{\infty}\frac{d\eta}{r(\eta)}=\infty\) and \(\int_{0}^{\infty}\frac{d\eta}{r(\eta)}<\infty\) for \(|p(t)|<+\infty\). Two illustrative examples are included.

Keywords:

Oscillation, nonoscillation, nonlinear, delay argument, second-order neutral differential equation.

1. Introduction

This article is concerned with sufficient conditions for oscillation of a nonlinear neutral second-order delay differential equation

\begin{align}\label{1} \frac{d}{dt}\bigl[r(t)\frac{d}{dt}z(t)\bigr]+q(t)G\bigl(x(t-\sigma_1)\bigr)+v(t)H\bigl(x(t-\sigma_2)\bigr)=0, \;\; t \geq t_0, \end{align}
(1)
where \(z(t)=x(t)+p(t)x(t-\tau)\) and \(p \in PC([t_0,\infty),\mathbb{R})\). We also suppose that the following assumptions hold:
  • \((A_1)\) \(r, q, v \in {C}([t_0, \infty),[0, \infty))\), \(\tau, \sigma_1, \sigma_2 \in \mathbb{R_+}\) and \(\rho=\max\{\tau, \sigma_1, \sigma_2\}\);
  • \((A_2)\) \(G, H\in C(\mathbb{R},\mathbb{R})\) with \(uH(u)>0\) and \(yG(y)>0\) for \(u, y \neq 0\);
  • \((A_3)\) \(\int_{0}^{\infty}\frac{d\eta}{r(\eta)}=\infty\);
  • \((A_4)\) \(\int_{0}^{\infty}\frac{d\eta}{r(\eta)}< \infty\).
Baculikova et al. [1] have considered the second order delay differential equation of the form
\begin{align}\label{bacu1} \frac{d}{dt} \Biggl[r(t) \frac{d}{dt} \biggl [x(t)+p(t)x(\tau(t))\biggr]\Biggr]+q(t)x(\sigma(t))+v(t)x(\eta(t))=0, \end{align}
(2)
where \(r(t), q(t), v(t) \in C([t_0,\infty))\), \(r(t), p(t), \tau(t), \sigma(t), \eta(t) \in C^1([t_0, \infty))\), and established several sufficient conditions for oscillation of solutions of (2) for \(0 \leq p(t) < \infty\). Li et al. [2] obtained sufficient conditions for oscillation of solutions of second order nonlinear neutral differential equations of the form \[ \frac{d}{dt} \Biggl[r(t) \biggl[\frac{d}{dt} \biggl [x(t)+p(t)x(t-\tau)\biggr]\biggr]^\gamma\Biggr]+q(t)f\big(x(t),x(\sigma(t))\big)=0, \] where \(p, q, r \in C([t_0, +\infty), (0, +\infty))\) and \(\gamma \geq 1\) is the quotient of two odd positive integers. In [3], Santra has considered first-order nonlinear neutral delay differential equations of the form
\begin{equation}\label{sss1} \frac{d}{dt}\bigl[x(t)+p(t)x(t-\tau)\bigr]+ q(t)H\bigl(x(t-\sigma)\bigr)=f(t) \end{equation}
(3)
and
\begin{equation}\label{sss2} \frac{d}{dt}\bigl[x(t)+p(t)x(t-\tau)\bigr]+ q(t)H\bigl(x(t-\sigma)\bigr)=0 \end{equation}
(4)
and studied oscillatory behaviour of the solutions of Equation (3) and Equation (4), under various ranges of \(p(t)\). Also, sufficient conditions are obtained for existence of bounded positive solutions of (3). Tripathy et al. [4] have established several sufficient conditions for the oscillation of solutions of the second order nonlinear neutral delay differential equations of the form \[ \frac{d}{dt} \Biggl[r(t) \frac{d}{dt} \biggl [x(t)+p(t)x(\tau(t))\biggr]\Biggr]+q(t)f\big(x(\sigma(t))\big)=0 \] and \[ \frac{d}{dt} \Biggl[r(t) \biggl[\frac{d}{dt} \biggl [x(t)+p(t)x(\tau(t))\biggr]\biggr]^\gamma\Biggr]+q(t)x^{\beta}(\sigma(t))=0, \] where \(r, q, \tau, \sigma \in C(\mathbb{R_+}, \mathbb{R_+})\), \(p \in C(\mathbb{R_+}, \mathbb{R})\) and \(\gamma, \beta\) are quotients of odd positive integers. Motivated by the above work, an attempt is made to study oscillatory behaviour of Equation (1) for \(|p(t)|< +\infty\). Here we work under both \((A_3)\) and \((A_4)\).

Neutral functional differential equations have numerous applications in several fields of science, for example, models of population growth and population dynamics, fractal theory, nonlinear oscillation of earthquakes, diffusion in porous media, fractional biological neurons, traffic flow, polymer rheology, neural network modeling, fluid dynamics, viscoelastic panels in supersonic gas flow, real systems characterized by power laws, electrodynamics of complex media, sandwich system identification, nuclear reactors, and mathematical modeling of the diffusion of discrete particles in a turbulent fluid (see [5, 6, 7, 9] and the references cited therein). In the last decades, several results have been obtained on oscillation of non-neutral and neutral functional differential equations (see [10, 11, 12, 13, 14, 15] and the references cited therein).

By a solution to Equation (1), we mean a function \(x\in { C}([T_x , \infty), \mathbb{R})\), \(T_x\geq t_0 \), which has the property \(rz'\in { C}^1([T_x , \infty), \mathbb{R})\) and satisfies Equation (1) on the interval \([T_x , \infty )\). We consider only those solutions to Equation (1) which satisfy condition \(\sup\{|x(t)|: t\geq T\}>0\) for all \(T\geq T_x\) and assume that Equation (1) possesses such solutions. A solution of Equation (1) is called oscillatory if it has arbitrarily large zeros on \([T_x, \infty)\); otherwise, it is said to be nonoscillatory. Equation (1) itself is said to be oscillatory if all of its solutions are oscillatory.

2. Sufficient Conditions for Oscillation

In this section, sufficient conditions are obtained for oscillatory and asymptotic behaviour of second order nonlinear neutral differential equations of the form (1).

Theorem 1. Let \(0\leq p(t)\leq p < 1\), \(t\in \mathbb{R}_+\). Assume that \((A_1)\)--\((A_3)\) hold. Furthermore assume that

  • \((A_5)\) \(G\) and \(H\) are nondecreasing and odd functions
and
  • \((A_6)\) \(\int_{T}^\infty [q(\eta)+Lv(\eta)]d\eta=\infty\), \(L=\frac{H(\varepsilon)}{G(\varepsilon)}>0\) for \(\varepsilon, T>0\)
hold. Then every solution of Equation (1) is oscillatory.

Proof. Suppose on the contrary that \(x(t)\) is a nonoscillatory solution of Equation (1). Then there exists \(t_0\geq \rho\) such that \(x(t)>0\) or \(x(t)< 0\) for \(t\geq t_0\). Assume that \(x(t)>0\), \(x(t-\tau)>0\), \(x(t-\sigma_1)>0\) and \(x(t-\sigma_2)>0\) for \(t\geq t_0\). From Equation (1), it follows that

\begin{align}\label{2} \bigl[r(t)z'(t)\bigr]'=-q(t)G\bigl(x(t-\sigma_1)\bigr)-v(t)H\bigl(x(t-\sigma_2)\bigr)< 0, \end{align}
(5)
holds for \(t\geq t_1>t_0\). Consequently, \(r(t)z'(t)\) is nonincreasing and \(z'(t)\), \(z(t)\) are of constant sign on \([t_2,\infty)\) for \(t_2>t_1\). Let \(r(t)z'(t)< 0\) for \(t \geq t_2\). Then we can find \(\varepsilon_1>0\) and a \(t_3> t_2\) such that \(r(t)z'(t)\leq -\varepsilon_1\) for \(t \geq t_3\). Integrating the relation \(z'(t) \leq - \frac{\varepsilon_1}{r(t)}\) from \(t_3\) to \(t\;(>t_3)\), we obtain \(z(t) \leq z(t_3)-\varepsilon_1\left[\int_{t_3}^{t} \frac{d\eta}{r(\eta)}\right] \to -\infty\) as \(t\to \infty\), a contradiction to the fact that \(z(t)>0\) for \(t \geq t_1\). Hence, \(r(t)z'(t)>0\) for \(t \geq t_2\). As a result, \(z(t)\) is nondecreasing on \([t_2, \infty)\). So, there exist \(\varepsilon_2>0\) and a \(t_3>t_2\) such that \(z(t) \geq \varepsilon_2\) for \(t \geq t_3\). On the other hand, since \(z(t)\) is nondecreasing, \begin{align*} (1-p(t))z(t)& \leq z(t)-p(t)z(t-\tau)\\& =x(t)+p(t)x(t-\tau)-p(t)x(t-\tau)-p(t)p(t-\tau)x(t-2\tau) \\ & =x(t)-p(t)p(t-\tau)x(t-2\tau) \leq x(t), \end{align*} that is, \((1-p)\varepsilon_2 \leq x(t)\). Consequently, \(x(t) \geq \varepsilon \) where \((1-p)\varepsilon_2=\varepsilon>0\). Therefore, (5) can be written as \begin{eqnarray*} \bigl(r(t)z'(t)\bigr)'+G(\varepsilon)[q(t)+Lv(t)]\leq 0. \end{eqnarray*} We note that \(\lim_{t\to \infty}r(t)z'(t)\) exists. Integrating the last inequality from \(t_3\) to \(t\;(>t_3)\), we get \begin{align*} G(\varepsilon)\int_{t_3}^t [q(\eta)+Lv(\eta)]d\eta \leq - [r(\eta)z'(\eta)]_{t_3}^{t} < \infty, \;\;\text{as}\;\;t\to \infty, \end{align*} a contradiction due to the assumption \((A_{6})\). If \(x(t)< 0\) for \(t \geq t_0\), then we set \(y(t)=-x(t)\) for \(t \geq t_0\) in (1) and using \((A_5)\) we find \begin{align*} \bigl(r(t)(y(t)+p(t)y(t-\tau))'\bigr)' + q(t)G\bigl(y(t-\sigma_1)\bigr)+ v(t)H\bigl(y(t-\sigma_2)\bigr)=0, \end{align*} then proceeding as above, we arrive at the same contradiction. This completes the proof of the theorem.

Theorem 2. Let \(1\leq p(t)\leq p< \infty\), \(t\in \mathbb{R}_+\) and \(G(p) \geq H(p)\). Assume that \((A_1)\)--\((A_3)\) and \((A_5)\) hold. Furthermore assume that there exists \(\lambda, \mu>0\) such that

  • \((A_7)\) \(G(u)+G(s)\geq \lambda G(u+s)\), \(H(u)+H(s)\geq \mu H(u+s)\) for \(u, s \in \mathbb{R_+}\),
  • \((A_8)\) \(G(us)\leq G(u)G(s)\), \(H(us)\leq H(u)H(s)\) for \(u, s \in \mathbb{R_+}\)
and
  • \((A_9)\) \(\int_{T}^{\infty}[Q(\eta)+L_1 V(\eta)]d\eta=\infty\), \(L_1 = \frac{\mu H(\varepsilon)}{\lambda G(\varepsilon)}>0\) for \(T,\varepsilon>0\)
hold, where \(Q(t)=\min\{q(t), q(t-\tau)\}\), \(V(t)=\min\{v(t),v(t-\tau)\}\). Then the conclusion of Theorem 1 holds.

Proof. Let \(x(t)\) be a nonoscillatory solution of Equation (1). Proceeding as in Theorem 1, we have two cases: \(r(t)z'(t)< 0\) and \(r(t)z'(t)>0\) for \(t\in [t_2,\infty)\). The former case follows from Theorem 1. Let us consider the latter case. As a result, \(z(t)\) is nondecreasing on \([t_2, \infty)\). So, there exist \(\varepsilon>0\) and a \(t_3>t_2\) such that \(z(t) \geq \varepsilon\) for \(t \geq t_3\). We note that \(\lim_{t\to \infty}r(t)z'(t)\) exists. From Equation (1), it is easy to see that \begin{eqnarray*} 0 & = &\bigl(r(t)z'(t)\bigr)'+q(t)G\bigl(x(t-\sigma_1)\bigr)+v(t)H\bigl(x(t-\sigma_2)\bigr) +G(p)\bigl[\bigl(r(t-\tau)z'(t-\tau)\bigr)'\\&&+q(t-\tau)G\bigl(x(t-\tau-\sigma_1)\bigr)+v(t-\tau)H\bigl(x(t-\tau-\sigma_2)\bigr)\bigr], \end{eqnarray*} in which we use \((A_7)\), \((A_8)\) and \(z(t)\leq x(t)+p x(t-\tau)\) to obtain \begin{eqnarray*} 0 & \geq& \bigl(r(t)z'(t)\bigr)'+G(p)\bigl(r(t-\tau)z'(t-\tau)\bigr)'+Q(t)\bigl[G(x(t-\sigma_1))+G(px(t-\tau-\sigma_1))\bigr] \\ &&+v(t)H\bigl(x(t-\sigma_2)\bigr) +G(p)v(t-\tau)H\bigl(x(t-\tau-\sigma_2)\bigr)\\ & \geq& \bigl(r(t)z'(t)\bigr)'+G(p)\bigl(r(t-\tau)z'(t-\tau)\bigr)'+\lambda Q(t)G\bigl[x(t-\sigma_1)+px(t-\tau-\sigma_1)\bigr] \\ &&+v(t)H\bigl(x(t-\sigma_2)\bigr) +G(p)v(t-\tau)H\bigl(x(t-\tau-\sigma_2)\bigr)\\ & \geq& \bigl(r(t)z'(t)\bigr)'+G(p)\bigl(r(t-\tau)z'(t-\tau)\bigr)'+\lambda Q(t)G\bigl(z(t-\sigma_1)\bigr)+v(t)H\bigl(x(t-\sigma_2)\bigr)\\&&+H(p)v(t-\tau)H\bigl(x(t-\tau-\sigma_2)\bigr), \end{eqnarray*} that is,

\begin{equation}\label{3} \bigl(r(t)z'(t)\bigr)'+G(p)\bigl(r(t-\tau)z'(t-\tau)\bigr)'+\lambda Q(t)G\bigl(z(t-\sigma_1)\bigr) +\mu V(t)H\bigl(z(t-\sigma_2)\bigr)\leq 0 \end{equation}
(6)
for \(t\geq t_3>t_2\). Consequently, \begin{equation*} \bigl(r(t)z'(t)\bigr)'+G(p)\bigl(r(t-\tau)z'(t-\tau)\bigr)'+\lambda Q(t)G(\varepsilon) +\mu V(t)H(\varepsilon)\leq 0. \end{equation*} Integrating the last inequality from \(t_3\) to \(t(>t_3)\), then \begin{align*} \lambda G(\varepsilon)\int_{t_3}^t [Q(\eta)+L_1 V(\eta)]d\eta & \leq - \bigl[r(\eta)z'(\eta)\bigr]_{t_3}^{t}+G(p)\bigl[r(\eta-\tau)z'(\eta-\tau)\bigr]_{t_3}^{t} < \infty, \;\;as\;\;t\to \infty, \end{align*} a contradiction due to the assumption \((A_9)\). The case \(x(t)< 0\) is similar. Thus the theorem is proved.

Theorem 3. Let \(-1\leq p(t)\leq0\), \(t\in \mathbb{R}_+\). If \((A_1)\)--\((A_3)\), \((A_5)\) and \((A_6)\) hold, then every unbounded solution of Equation (1) oscillates.

Proof. Suppose on the contrary that \(x(t)\) is an unbounded nonoscillatory solution of Equation (1) on \([t_0,\infty)\), \(t_0>\rho\). Proceeding as in Theorem 1, we conclude that \(r(t)z'(t)\) is nonincreasing and \(z(t)\), \(z'(t)\) are monotonic on \([t_2,\infty)\). Indeed, \(z(t)< 0\) for \(t \geq t_3\) implies that \(x(t) \leq x(t-\tau)\), and hence $$ x(t)\leq x(t-\tau) \leq x(t-2\tau)\leq \cdots \leq x(t_3),$$ that is, \(x(t)\) is bounded, which is absurd. Hence, \(z(t)>0\) for \(t\geq t_3\). Suppose that \(r(t)z'(t)>0\) for \(t\geq t_3\). Clearly, \(z(t)\leq x(t)\) implies that

\begin{align}\label{4} \bigl(r(t)z'(t)\bigr)'+q(t)G\bigl(z(t-\sigma_1)\bigr)+v(t)H\bigl(z(t-\sigma_2)\bigr)\leq 0 \end{align}
(7)
for \(t\geq t_3\). On the other hand, since \(z(t)\) is nondecreasing, there exist \(\varepsilon>0\) and a \(t_4>t_3\) such that \(z(t)\geq \varepsilon\) for \(t\geq t_4\). Consequently, for \(t_5>t_4+\rho\), it follows from Equation (7) that \begin{eqnarray*} \bigl(r(t)z'(t)\bigr)'+G(\varepsilon)q(t)+H(\varepsilon)v(t)\leq 0, \; t\geq t_5. \end{eqnarray*} Integrating the last inequality from \(t_5\) to \(t\;(>t_5)\), we have \begin{align*} G(\varepsilon) \int_{t_5}^{t}[q(\eta)+Lv(\eta)]d\eta \leq -\bigl[r(\eta)z'(\eta)\bigr]_{t_5}^{t} < \infty, \; \text{as} \; t \to \infty, \end{align*} a contradiction to \((A_6)\). Hence, \(r(t)z'(t)< 0\) for \(t\geq t_3\). The rest of the proof follows from Theorem 1. Thus, the proof of the theorem is complete.

Theorem 4. Let \(-1< -p\leq p(t)\leq 0\), \(t\in \mathbb{R}_+\) and \(p>0\). If all the assumptions of Theorem 3 hold, then every solution of Equation (1) either oscillates or converges to zero as \(t\to \infty\).

Proof. Proceeding as in the proof of Theorem 1, we obtain Equation (5), and hence \(r(t)z'(t)\) is nonincreasing on \([t_2,\infty)\). Therefore, \(z(t)\) is monotonic on \([t_3,\infty)\), \(t_3>t_2\). So we have the following four cases:

  1. \(z(t)>0, \;\;\;\; r(t)z'(t)>0,\)
  2. \(z(t)>0, \;\;\;\; r(t)z'(t)< 0,\)
  3. \(z(t)< 0, \;\;\;\; r(t)z'(t)>0,\)
  4. \(z(t)< 0, \;\;\;\; r(t)z'(t)< 0.\)
Using the arguments in the proofs of Theorems 1 and 3, we get contradictions to \((A_3)\) and \((A_6)\) in Case (2) and Case (1), respectively. Since \(z(t)< 0\) implies that \(x(t)\) is bounded, and hence that \(z(t)\) is bounded, Case (4) is not possible due to Theorem 1 (since \(z'(t)< 0\) implies that \(\lim_{t \to \infty}z(t)=-\infty\)). Consequently, Case (3) holds for \(t\geq t_3\). In this case, \(\lim_{t \to \infty} z(t)\) exists. As a result, \begin{eqnarray*} 0 & \geq & \lim_{t\to\infty}z(t)=\limsup_{t\to \infty}z(t) = \limsup_{t\to \infty} \bigl(x(t)+p(t)\;x(t-\tau)\bigr) \\ & \geq & \limsup_{t\to\infty} \bigl(x(t)- p\;x(t-\tau)\bigr) \\ & \geq & \limsup_{t\to\infty} x(t)+ \liminf_{t\to\infty} \bigl(-px(t-\tau)\bigr) = (1-p) \limsup_{t\to\infty} x(t) \end{eqnarray*} implies that \(\limsup_{t\to \infty} x(t)=0\) (because \(1-p>0\)) and hence \(\liminf_{t\to \infty} x(t)=0.\) Thus \(\lim_{t\to \infty} x(t)=0\). The case \(x(t)< 0\) is dealt with similarly. This completes the proof of the theorem.

Theorem 5. Let \(-\infty < -p_1\leq p(t)\leq-p_2< -1\), \(p_1, p_2>0\) and \(t\in \mathbb{R}_+\). Assume that \((A_1)\)--\((A_3)\), \((A_5)\) and \((A_6)\) hold. If

  • \((A_{10})\) \(\int_{T}^{\infty}[q(\eta)+L_2 v(\eta)]d\eta=\infty\), \(L_2 = \frac{ H(-p_1 ^{-1} \alpha)}{G(-p_1 ^{-1} \alpha)}>0\) for \(T, p_1>0\) and \(\alpha< 0\),
then every bounded solution of Equation (1) either oscillates or converges to zero as \(t\to \infty\).

Proof. Suppose on the contrary that \(x(t)\) is a bounded nonoscillatory solution of Equation (1) on \([t_0,\infty)\), \(t_0>\rho\). Using the same type of reasoning as in Theorem 1, we have that \(z'(t)\) and \(z(t)\) are of one sign on \([t_2,\infty)\) and have four possible cases as in Theorem 4. Case (2) and Case (4) are not possible because of \((A_3)\) and the boundedness of \(z(t)\). Case (1) follows from the proof of Theorem 3. For Case (3), we claim that \(\lim_{t \to \infty} z(t)=0\). If not, there exist \(\alpha< 0\) and \(t_3>t_2\) such that \(z(t+\tau-\sigma_1)< \alpha\) and \(z(t+\tau-\sigma_2)< \alpha\) for \(t\geq t_3\). Hence, \(z(t)\geq p(t)x(t-\tau)\geq -p_1x(t-\tau)\) implies that \(x(t-\sigma_1)\geq -p_1 ^{-1} \alpha >0\) and \(x(t-\sigma_2)\geq -p_1 ^{-1} \alpha >0\) for \(t \geq t_3\). Consequently, Equation (5) becomes \begin{eqnarray}\label{5} \bigl(r(t)z'(t)\bigr)'+G(-p_1 ^{-1} \alpha)q(t)+H(-p_1 ^{-1} \alpha)v(t)\leq0 \end{eqnarray} for \(t\geq t_3\). Integrating the last inequality from \(t_3\) to \(t(>t_3)\), we get \begin{align*} G(-p_1 ^{-1} \alpha)\int_{t_3}^{t}[q(\eta)+L_2 v(\eta)]d\eta \leq -\bigl[r(\eta)z'(\eta)\bigr]_{t_3}^{t} < \infty, \; \text{as} \; t \to \infty, \end{align*} a contradiction to \((A_{10})\). Ultimately, \(\lim_{t\to\infty}z(t)=0\). Hence, \begin{eqnarray*} 0 & = & \lim_{t\to\infty}z(t)=\liminf\limits_{t\to \infty}z(t)\\ & \leq & \liminf_{t\to\infty} \bigl(x(t)- p_2\;x(t-\tau)\bigr) \\ & \leq & \limsup_{t\to\infty} x(t)+ \liminf\limits_{t\to\infty} \bigl(-p_2\;x(t-\tau)\bigr) \\ & = & (1-p_2) \limsup_{t\to\infty} x(t) \end{eqnarray*} implies that \(\limsup_{t\to \infty}x(t)=0\) (because \(1-p_2< 0\)). Thus, \(\liminf_{t\to \infty}x(t)=0\) and hence \(\lim_{t\to \infty}x(t)=0.\) Therefore, \(x(t)\) converges to zero. The case \(x(t)< 0\) is treated similarly. This completes the proof of the theorem.

Remark 1. If we denote \(R(t)=\int_{t}^{\infty}\frac{d\eta}{r(\eta)}\), then \((A_4)\) implies that \(R(t) \to 0\) as \(t \to \infty\), since \(R(t)\) is the nonincreasing tail of a convergent integral.

Theorem 6. Let \(0\leq p(t)\leq p< \infty\), \(t\in \mathbb{R}_+\) and \(G(p) \geq H(p)\). Assume that \((A_1)\), \((A_2)\), \((A_4)\), \((A_5)\) and \((A_7)\)--\((A_9)\) hold. If

  • \((A_{11})\) \(\int_{T}^{\infty}\frac{1}{r(\eta)}\left[\int_{T_1}^{\eta} \bigl\{Q(\zeta)G\bigl(\varepsilon R(\zeta-\sigma_1)\bigr)+L_3V(\zeta)H\bigl(\varepsilon R(\zeta-\sigma_2)\bigr)\bigr\}d\zeta\right] d\eta=\infty\) for \(T, T_1, \varepsilon>0\),
where \(L_3=\frac{\mu}{\lambda}>0\) and \(Q(t)\), \(V(t)\) are as defined in Theorem 2, then the conclusion of Theorem 1 is again true.

Proof. Suppose, on the contrary, that \(x(t)\) is a nonoscillatory solution. Proceeding as in Theorem 1, we obtain Equation (5) for \(t\geq t_1\), and \(r(t)z'(t)\) is nonincreasing on \([t_2,\infty)\), \(t_2>t_1\). The case \(r(t)z'(t)>0\) for \(t \geq t_2\) is the same as in Theorem 2 and gives a contradiction due to \((A_9)\). Suppose now that \(r(t)z'(t)< 0\) for \(t\geq t_2\). Therefore, for \(s\geq t>t_2\), \(r(s)z'(s)\leq r(t)z'(t)\) implies that \begin{align*} z'(s)\leq \frac{r(t)z'(t)}{r(s)}. \end{align*} Consequently, \begin{align*} z(s)\leq z(t)+r(t)z'(t)\int_{t}^{s}\frac{d\theta}{r(\theta)}. \end{align*} Since \(r(t)z'(t)\) is nonincreasing, we can find a constant \(\varepsilon>0\) such that \(r(t)z'(t)\leq -\varepsilon\) for \(t\geq t_2\). As a result, \(z(s)\leq z(t)-\varepsilon\int_{t}^{s}\frac{d\eta}{r(\eta)}\) and hence \(0\leq z(t)-\varepsilon R(t)\) for \(t\geq t_2\). Using the above fact in Equation (6), we get \begin{align*} \bigl(r(t)z'(t)\bigr)'+G(p)\bigl(r(t-\tau)z'(t-\tau)\bigr)'+\lambda Q(t)G\bigl(\varepsilon R(t-\sigma_1)\bigr)+\mu V(t)H\bigl(\varepsilon R(t-\sigma_2)\bigr)\leq 0 \end{align*} for \(t\geq t_3>t_2\). Integrating the last inequality from \(t_3\) to \(t(>t_3)\), we obtain \begin{align*} \bigl[r(\eta)z'(\eta)\bigr]_{t_3}^t+G(p)\bigl[r(\eta-\tau) z'(\eta-\tau)\bigr]_{t_3}^t+\lambda \int_{t_3}^t \bigl[Q(\eta)G\bigl(\varepsilon R(\eta-\sigma_1)\bigr)+L_3V(\eta)H\bigl(\varepsilon R(\eta-\sigma_2)\bigr)\bigr]d\eta\leq 0, \end{align*} that is, \begin{eqnarray*} \lambda\int_{t_3}^t \bigl[Q(\eta)G(\varepsilon R(\eta-\sigma_1))+L_3V(\eta)H\bigl(\varepsilon R(\eta-\sigma_2)\bigr)\bigr]d\eta & \leq & -\bigl[r(\eta)z'(\eta)+G(p)\bigl(r(\eta-\tau)z'(\eta-\tau)\bigr)\bigr]_{t_3}^t \\ & \leq & -\bigl[r(t)z'(t)+G(p)\bigl(r(t-\tau)z'(t-\tau)\bigr)\bigr] \\ & \leq & -\bigl(1+G(p)\bigr)r(t)z'(t) \end{eqnarray*} implies that \begin{eqnarray*} \frac{\lambda}{1+G(p)} \frac{1}{r(t)} \int_{t_3}^t \bigl[Q(\eta)G\bigl(\varepsilon R(\eta-\sigma_1)\bigr)+L_3V(\eta)H\bigl(\varepsilon R(\eta-\sigma_2)\bigr)\bigr]d\eta \leq - z'(t). \end{eqnarray*} Again integrating the last inequality, we obtain that \begin{eqnarray*} \frac{\lambda }{1+G(p)} \int_{t_3}^{t}\frac{1}{r(\eta)}\left[\int_{t_3}^\eta \bigl\{Q(\zeta)G\bigl(\varepsilon R(\zeta-\sigma_1)\bigr)+L_3V(\zeta)H\bigl(\varepsilon R(\zeta-\sigma_2)\bigr)\bigr\}d\zeta\right]d\eta \leq - \bigl[z(\eta)\bigr]_{t_3}^{t}. \end{eqnarray*} Since \(z(t)\) is bounded and monotonic, it follows that \begin{eqnarray*} \int_{t_3}^{t}\frac{1}{r(\eta)}\left[\int_{t_3}^\eta \bigl\{Q(\zeta)G\bigl(\varepsilon R(\zeta-\sigma_1)\bigr)+L_3V(\zeta)H\bigl( \varepsilon R(\zeta-\sigma_2)\bigr)\bigr\}d\zeta\right]d\eta < \infty, \end{eqnarray*} a contradiction to \((A_{11})\). The case \(x(t)< 0\) is dealt with similarly. This completes the proof of the theorem.

Theorem 7. Let \(-1\leq p(t) \leq 0,\) \(t\in \mathbb{R}_+\). Assume that \((A_1)\), \((A_2)\) and \((A_4)\)--\((A_6)\) hold. Furthermore assume that

  • \((A_{12})\) \(\int_{T}^{\infty}\frac{1}{r(\eta)}\left[\int_{T_1}^{\eta} \bigl\{q(\zeta)G\bigl(\varepsilon R(\zeta-\sigma_1)\bigr)+v(\zeta)H\bigl(\varepsilon R(\zeta-\sigma_2)\bigr)\bigr\}d\zeta\right] d\eta=\infty\) for \(T, T_1, \varepsilon>0\)
holds. Then the conclusion of Theorem 3 is true.

Proof. The proof of the theorem follows from the proofs of Theorems 3 and 6, and hence the details are omitted.

Theorem 8. Let \(-1< -p\leq p(t)\leq 0\), \(t\in \mathbb{R}_+\) and \(p>0\). If all the conditions of Theorem 7 are satisfied, then the conclusion of Theorem 4 is true.

Proof. The proof of the theorem follows from the proofs of Theorems 4 and 7. Hence, the proof of the theorem is complete.

Theorem 9. Let \(-\infty< -p_1\leq p(t)\leq -p_2< -1\), \(t\in \mathbb{R}_+\) and \(p_1, p_2>0\). Assume that \((A_1)\), \((A_2)\), \((A_4)\)--\((A_6)\), \((A_{10})\) and \((A_{12})\) hold. If

  • \((A_{13})\) \(\int_{T}^{\infty}\frac{1}{r(\eta)} \bigl[\int_{T_1}^{\eta}\bigl\{q(\zeta)+L_2v(\zeta)\bigr\}d\zeta\bigr]d\eta=\infty\) for \(T, T_1>0\),
where \(L_2\) is defined in Theorem 5, then the conclusion of Theorem 5 is true.

Proof. Proceeding as in the proof of Theorem 5, we have four possible cases for \(t\geq t_2\). The first two cases are treated as in the proof of Theorem 8.
Case (3) is similar to the proof of Theorem 5. Hence, we consider Case (4) only. Using the same type of reasoning as in Case (3) of Theorem 5, we get Equation (8) and hence \begin{align*} H(-p_1 ^{-1} \alpha)\biggl[\int_{t_3}^{t} \bigl\{q(\eta)+L_2v(\eta)\bigr\}d\eta \biggr]\leq -r(t)z'(t). \end{align*} Therefore, \begin{eqnarray*} H(-p_1 ^{-1} \alpha)\int_{t_3}^{t}\frac{1}{r(\eta)} \biggl[\int_{t_3}^{\eta}\bigl\{q(\zeta)+L_2v(\zeta)\bigr\}d\zeta\biggr]d\eta \leq - \bigl[z(\eta)\bigr]_{t_3}^{t} \leq -z(t) < \infty, \;\; \text{as} \;\; t \to \infty, \end{eqnarray*} a contradiction to \((A_{13})\). The rest follows from the proof of Theorem 5. This completes the proof of the theorem.

3. Final Comment and Examples

In this section, we give some concluding remarks and two illustrative examples.

Remark 2. In Theorems 1–9, \(G\) and \(H\) are allowed to be linear, sublinear or superlinear. A prototype of the functions \(G\) and \(H\) satisfying \((A_2)\), \((A_5)\), \((A_7)\) and \((A_8)\) is

\begin{equation} (1+\alpha|u|^{\beta})|u|^{\gamma} \mathrm{sgn}(u) \quad\text{for}\ u\in \mathbb{R}, \end{equation}
(9)
where \(\alpha\geq1\) or \(\alpha=0\) and \(\beta,\gamma>0\) are reals. For verifying \((A_7)\), we may use the well-known inequality (see [16, p. 292])
\begin{equation} u^{p}+v^{p}\geq{}h(p)(u+v)^{p}\quad\text{for}\ u,v>0, \quad\text{where}\quad h(p):= \left\{ \begin{array}{cc} 1,&0\leq{}p\leq1,\\ \dfrac{1}{2^{p-1}},&p\geq1. \end{array} \right.\notag \end{equation}
We conclude the paper by presenting two examples, which illustrate the main results.

Example 1. Consider the differential equation

\begin{equation}\label{exm4eq1} \frac{d}{dt}\Biggl[e^{-4t}\frac{d}{dt}\biggl[x(t)+x(t-\pi)\biggr]\Biggr] +e^t\bigl(x(t-\tfrac{\pi}{2})\bigr)^{3} +e^t\bigl(x(t-\tfrac{3\pi}{2})\bigr)^3=0\quad\text{for}\ t\geq \pi, \end{equation}
(10)
where \(r(t):=e^{-4t}\), \(p(t):\equiv 1\), \(\tau:=\pi\), \(q(t):=e^t\), \(\sigma_{1}:=\tfrac{\pi}{2}\), \(G(u):=u^{3}\), \(v(t):=e^t\), \(\sigma_{2}:=\tfrac{3\pi}{2}\) and \(H(u):=u^3\) for \(t\geq \pi\) and \(u\in \mathbb{R}\). All the assumptions of Theorem 2 can be verified (note that \(p(t)\equiv 1\), so Theorem 2, which allows \(1\leq p(t)\leq p<\infty\), applies). Hence, due to Theorem 2, every solution of Equation (10) oscillates. Clearly, \(x(t)=\sin(t)\) for \(t\geq \pi\) is a solution of Equation (10).
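A quick symbolic check of this example (our own sketch, using sympy) confirms that \(x(t)=\sin t\) satisfies Equation (10): the neutral term \(x(t)+x(t-\pi)\) vanishes identically, and the two forcing terms cancel.

```python
import sympy as sp

t = sp.symbols('t')
x = lambda s: sp.sin(s)

z = x(t) + x(t - sp.pi)          # the neutral term; equals 0 since sin(t - pi) = -sin(t)
lhs = (sp.diff(sp.exp(-4*t) * sp.diff(z, t), t)
       + sp.exp(t) * x(t - sp.pi/2)**3
       + sp.exp(t) * x(t - 3*sp.pi/2)**3)
print(sp.simplify(lhs))          # -> 0
```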

Example 2. Consider the differential equation

\begin{equation}\label{exm5eq1} \frac{d}{dt}\Biggl[t^{2}\frac{d}{dt}\biggl[x(t)-e^{-\pi}x(t-\pi)\biggr]\Biggr]+4\cosh(\pi)t\bigl[e^{-\frac{\pi}{2}}(t+1)x(t-\tfrac{\pi}{2})+x(t-\pi)\bigr]=0\quad\text{for}\ t\geq2\pi, \end{equation}
(11)
where \(r(t):=t^{2}\), \(R(t):=\frac{1}{t}\), \(p(t):\equiv e^{-\pi}\), \(\tau:=\pi\), \(q(t):=4e^{-\frac{\pi}{2}}\cosh(\pi)t(t+1)\), \(\sigma_{1}:=\tfrac{\pi}{2}\), \(G(u):=u\), \(v(t):=4\cosh(\pi)t\), \(\sigma_{2}:=\pi\) and \(H(u):=u\) for \(t\geq2\pi\) and \(u\in \mathbb{R}\). All the assumptions of Theorem 7 can be verified. In particular, for \((A_{12})\), keeping only the \(v\)-term of the integrand we have
\begin{equation} \int_{2\pi}^{\infty}\frac{1}{\eta^{2}}\int_{2\pi}^{\eta}4\cosh(\pi)\zeta\frac{\varepsilon}{\zeta-\pi}d\zeta d\eta=\infty \quad\text{for any}\ \varepsilon>0.\notag \end{equation}
Hence, due to Theorem 7, every unbounded solution of Equation (11) oscillates; one such solution is \(x(t)=e^{t}\sin(t)\) for \(t\geq2\pi\).
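A symbolic check analogous to the one for Example 1 (our own sketch, using sympy and taking \(r(t)=t^{2}\) as in the definitions above) supports this claim.

```python
import sympy as sp

t = sp.symbols('t')
x = lambda s: sp.exp(s) * sp.sin(s)

z = x(t) - sp.exp(-sp.pi) * x(t - sp.pi)     # = (1 + e^(-2*pi)) * e^t * sin(t)
lhs = (sp.diff(t**2 * sp.diff(z, t), t)
       + 4*sp.cosh(sp.pi)*t*(sp.exp(-sp.pi/2)*(t + 1)*x(t - sp.pi/2) + x(t - sp.pi)))
print(sp.simplify(lhs.rewrite(sp.exp)))      # expected to reduce to 0
print([lhs.evalf(subs={t: v}) for v in (7, 9, 12)])   # numerical spot check, values ~ 0
```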

Acknowledgments

This work is supported by the Department of Science and Technology (DST), New Delhi, India, through the bank instruction order No. DST/INSPIRE Fellowship/2014/140, dated Sept. 15, 2014.

Author Contributions

All authors contributed equally to the writing of this paper. All authors read and approved the final manuscript.

Competing Interests

The author(s) do not have any competing interests in the manuscript.

References

  1. Baculíková, B., & Džurina, J. (2011). Oscillation theorems for second order neutral differential equations. Computers & Mathematics with Applications, 61(1), 94-99. [Google Scholor]
  2. Li, T., Rogovchenko, Y. V., & Zhang, C. (2013). Oscillation results for second-order nonlinear neutral differential equations. Advances in Difference Equations, 2013(1), 336. [Google Scholor]
  3. Santra, S. S. (2016). Existence of positive solution and new oscillation criteria for nonlinear first order neutral delay differential equations. Differential Equations & Applications, 8(1), 33-51. [Google Scholor]
  4. Tripathy, A. K., Panda, B., & Sethi, A. K. (2016). On oscillatory nonlinear second order neutral delay differential equations. Differential Equations & Applications, 8, 247-258.[Google Scholor]
  5. Hale, J. K., & Lunel, S. M. V. (2013). Introduction to functional differential equations (Vol. 99). Springer Science & Business Media. [Google Scholor]
  6. Kilbas, A. A. A., Srivastava, H. M., & Trujillo, J. J. (2006). Theory and applications of fractional differential equations (Vol. 204). Elsevier Science Limited. [Google Scholor]
  7. Miller, K. S., & Ross, B. (1993). An introduction to the fractional calculus and fractional differential equations. John Wiley and Sons, Inc., New York. [Google Scholor]
  8. Podlubny, I. (1998). Fractional differential equations: an introduction to fractional derivatives, fractional differential equations, to methods of their solution and some of their applications (Vol. 198). Elsevier. [Google Scholor]
  9. Kilbas, A. A., Marichev, O. I., & Samko, S. G. (1993). Fractional integral and derivatives (theory and applications). Gordon and Breach Science Publisher, Yverdon. [Google Scholor]
  10. Baculíková, B., Li, T., & Džurina, J. (2013). Oscillation theorems for second-order superlinear neutral differential equations. Mathematica Slovaca, 63(1), 123-134. [Google Scholor]
  11. Hasanbulli, M., & Rogovchenko, Y. V. (2010). Oscillation criteria for second order nonlinear neutral differential equations. Applied Mathematics and Computation, 215(12), 4392-4399. [Google Scholor]
  12. Li, T., & Rogovchenko, Y. V. (2014). Oscillation theorems for second-order nonlinear neutral delay differential equations,Abstract and Applied Analysis, 2014 (2014), Article ID 594190. [Google Scholor]
  13. Liu, Y., Zhang, J., & Yan, J. (2015). Existence of oscillatory solutions of second order delay differential equations. Journal of Computational and Applied Mathematics, 277, 17-22. [Google Scholor]
  14. Tamilvanan, S., Thandapani, E., & Dzurina, J. (2017). Oscillation of second order nonlinear differential equation with sublinear neutral term. Differential Equations & Applications, 9(1), 29-35. [Google Scholor]
  15. Yan, J. (2011). Existence of oscillatory solutions of forced second order delay differential equations. Applied Mathematics Letters, 24(8), 1455-1460. [Google Scholor]
  16. Hildebrandt, T. H. (1963). Introduction to the Theory of Integration. New York-London: Academic Press. [Google Scholor]
]]>
Evaluation of Markov chains to describe movements on tiling https://old.pisrt.org/psr-press/journals/oms-vol-3-2019/evaluation-of-markov-chains-to-describe-movements-on-tiling/ Sat, 30 Nov 2019 13:27:53 +0000 https://old.pisrt.org/?p=3504
OMS-Vol. 3 (2019), Issue 1, pp. 358 – 381 Open Access Full-Text PDF
Meseyeki Saiguran, Arne Ring, Abdullahi Ibrahim
Abstract: This study investigates the movement of a molecule across biological cells via the cell walls at any given time. Specifically, we examine the movement of a particle on a tiling, i.e. on hexagonal and square tilings. The specific questions we pose include (i) whether particles move faster in hexagonal tiling or in square tiling, and (ii) whether the starting point of a particle affects the movement toward attainment of the stationary distribution. We employ the transition probabilities and the stationary distribution to derive the expected passage time to state \(j\) from state \(i\), and the expected recurrence time to state \(i\), in both hexagonal and square tilings. We also employ aggregation of state symmetries to reduce the number of states in order to overcome the problems (i.e. the difficulty of performing algebraic computations) associated with large transition matrices. This approach leads to the formation of a new Markov chain \(X_t'\) that retains the properties of the original Markov chain, obtained by aggregating states with the same stochastic behavior with respect to the process. Graphical visualization of how fast the equilibrium is attained for different values of the probability parameter \(p\) in both tilings is also provided. Due to difficulties in obtaining some analytical results, numerical simulations were performed to obtain useful quantities such as the expected passage time and the recurrence time.
]]>

Open Journal of Mathematical Sciences

Evaluation of Markov chains to describe movements on tiling

Meseyeki Saiguran\(^1\), Arne Ring, Abdullahi Ibrahim
Department of Mathematical Sciences, St. John’s University of Tanzania, Tanzania.; (M.S)
Department of Mathematics, University of the Free State, South Africa.; (A.R)
Department of Mathematical Sciences, Baze University Abuja, Nigeria.; (A.I)
\(^{1}\)Corresponding Author: messiaroine@gmail.com

Abstract

This study investigates the movement of a molecule across biological cells via the cell walls at any given time. Specifically, we examine the movement of a particle on a tiling, i.e. on hexagonal and square tilings. The specific questions we pose include (i) whether particles move faster in hexagonal tiling or in square tiling, and (ii) whether the starting point of a particle affects the movement toward attainment of the stationary distribution. We employ the transition probabilities and the stationary distribution to derive the expected passage time to state \(j\) from state \(i\), and the expected recurrence time to state \(i\), in both hexagonal and square tilings. We also employ aggregation of state symmetries to reduce the number of states in order to overcome the problems (i.e. the difficulty of performing algebraic computations) associated with large transition matrices. This approach leads to the formation of a new Markov chain \(X_t'\) that retains the properties of the original Markov chain, obtained by aggregating states with the same stochastic behavior with respect to the process. Graphical visualization of how fast the equilibrium is attained for different values of the probability parameter \(p\) in both tilings is also provided. Due to difficulties in obtaining some analytical results, numerical simulations were performed to obtain useful quantities such as the expected passage time and the recurrence time.

Keywords:

Markov chains, hexagonal tiling, square tiling, symmetries, expected passage time, expected recurrence time.

1. Introduction

When a molecule moves across a collection of biological cells via the cell walls, there are several possible cells to which it can randomly move from the starting cell. The collection of these random possibilities is a stochastic process which constitutes a Markov chain \(\{X_t , t > 0\}\) with a state space \(S\) that describes the possible values of the stochastic variables. We will consider movements of a particle on a tessellated \(2D\) plane, specifically hexagonal and square tilings. When a particle moves in these planes from its initial cell to a neighboring cell, the possible movements depend on whether the particle is moving on hexagonal tiling or square tiling. For a square tiling, there are four possible movements of a particle initially at the central cell (i.e. to the north, south, west and east); for a hexagonal tiling, however, the particle can move to any of six neighboring cells. We will construct transition matrices to describe the movement of the particle from the initial cell to the neighboring cells for both tilings. These will be used to derive the stationary distribution of the process, the hitting time probabilities from state \(i\) to \(j\) at a certain step, the expected passage time of the particle from state \(i\) to \(j\), and the recurrence time from state \(i\) to \(i\), in both tilings for a small cell complex. We will also examine the impact of different values of the probability parameter \(p\) on the movement rate of a particle toward the stationary distribution. To overcome the problems associated with performing algebraic and numerical computations with transition matrices of large size, we will identify states with similar stochastic behavior with respect to the Markov process. States with the same stochastic behavior under permutation or reordering are termed symmetric or equivalent states.

For symmetric states, we will define a special structure of the transition matrix which may be useful in reducing the state space of the transition matrix. Symmetries can be used to lump together equivalent states so as to reduce the number of equations to be solved for the state probabilities of the given Markov process, thereby minimizing the time and effort for both numerical and algebraic computations [1, 2]. The process of reducing the state space is known as aggregation of the Markov chain. This involves partitioning the process into subsets where each subset retains the original Markov process properties [3]. This aggregation results in a new Markov chain (the aggregated chain) \(X_{t}'\) with a smaller number of states, such that the finite probabilities of the aggregated states equal the finite probabilities of the corresponding states of the initial Markov chain [4].

Over the years, algorithms have been developed for computing the equilibrium distribution of large Markov chains. One such algorithm is Takahashi's iterative algorithm for computing the equilibrium distribution of a Markov chain in discrete or continuous time by alternately solving an aggregated and a disaggregated version of the problem [5]. According to [6], if a Markov chain is strongly lumpable then the eigenvalues of the transition matrix on the aggregated state space are all found among the eigenvalues of the transition matrix on the original state space [1]. Therefore, for an exactly lumpable Markov process, the aggregation-disaggregation algorithm converges in one step [6]. A practical problem that arises in connection with applications of Markov chain models is to determine whether the Markov chain is lumpable [3]. For chains with a large state space it is practically impossible to determine whether the conditions of lumpability are met. Hence, in such situations an alternative approach based on eigenvectors is used to verify whether the conditions are met [3]. The main objective of this study is to describe the movements of molecules across biological cells via the cell walls and to determine the probability that a particle is in a particular state at given points in time.

To derive these probabilities we introduce the parameter \(p\), which defines the probability of the particle moving from one cell to a neighboring cell. Different values of \(p\), whether small or large, help us investigate the rate of movement of a particle to different cells and how fast the equilibrium can be reached in both hexagonal and square tilings. Standard Markov chains restrict the probability distribution to take into account only the previous state. Higher-order Markov chains relax this condition by taking into account \(n\) previous states, where \(n\) is a finite natural number [7]. Several works on Markov chains can be found in [8, 9]. Also, recent works that discuss tiling and Markov chains can be seen in [10, 11, 12, 13].

The following are the questions we seek to investigate:
  • (a) Determine whether different values of the probability parameter \(p\) affect (i) the speed of the movement of a particle from the starting cell to the neighboring cells and (ii) the attainment of the stationary distribution.
  • (b) Determine whether the starting point of the molecule in a small cell complex alters the attainment of the stationary distribution.
  • (c) Determine whether the tiling influences the attainment of the equilibrium status.
In this work, the following are the tentative answers (hypotheses) to the research questions that we are going to test: (a) the smaller the value of \(p\), (i) the slower the movement of the particle (and vice versa) and (ii) the faster the equilibrium status will be attained; (b) the movement of the particle in hexagonal tiling is faster than in square tiling, thereby leading to a faster attainment of the stationary distribution.
The rest of the paper is structured as follows: the basic concepts used are described in Section 2; the investigation of movements in discrete time on hexagonal and square small cell complexes using different approaches is in Section 3; numerical simulation is in Section 4; Section 5 covers the discussion of the investigations; and Section 6 is the conclusion.

2. Preliminaries

2.1. Markov chains

The prerequisite for defining a Markov chain is an understanding of stochastic processes and stochastic variables.

Definition 1. (Stochastic variable and process)
A stochastic variable is one whose possible outcomes are the results of a random phenomenon. A stochastic process \(X_t\) is a collection of stochastic variables indexed by a parameter, for instance time. A state space \(S\) is the finite set of possible values of the random variables in discrete time. Consider the stochastic process \(\{X_t\}, t\in T\), where \(t\) is time and \(T\) represents the time index set. If the conditional probability distribution of the present state of the process depends only on the immediate past, then the stochastic process is said to have the Markov property. Mathematically, we can define the Markov chain as:

\begin{equation} P(X_{t+1} = j \mid X_{t} = i, X_{t-1} = i_{t-1}, X_{t-2} = i_{t-2}, \cdots) = P(X_{t+1} = j \mid X_{t} = i) = P(i, j). \label{1} \end{equation}
(1)
where \(i\) and \(j\) represent the current and future states respectively. That is, conditioning on the history of the process up to time \(t\) is equivalent to conditioning only on the most recent state; the past is irrelevant for predicting the future given knowledge of the present. Now we define a stochastic matrix \(P\) as follows:
\begin{equation} P= \begin{pmatrix} P_{1,1} & P_{1,2} & \dots & P_{1,n}\\ P_{2,1} & P_{2,2} & \dots & P_{2,n}\\ \vdots & \vdots & \ddots & \vdots\\ P_{i,1} & P_{i,2} & \dots & P_{i,n}\\ \vdots & \vdots & \ddots & \vdots\\ P_{n,1} & P_{n,2} & \dots & P_{n,n}\\ \end{pmatrix}, \quad \mbox{where} \quad \sum_{j=1}^{n}P_{i,j}=1. \label{2} \end{equation}
(2)
All stochastic processes defined in this study are homogeneous Markov chains, i.e. the transition probabilities \(P_{i,j}\) are time independent. Let \(v=(X_0)\) be the starting vector at \(t=0\); then we define the Markov chain as \(M = (X_0, P^{t})\). We describe below some common properties of a Markov chain.
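As a minimal numerical illustration of Equation (2) and of propagating a starting vector by one step (a sketch assuming NumPy; the 3-state matrix below is hypothetical and not one of the tiling matrices studied later):

```python
import numpy as np

# Hypothetical 3-state row-stochastic matrix: every row sums to 1 (Equation (2)).
P = np.array([[0.5, 0.3, 0.2],
              [0.1, 0.8, 0.1],
              [0.2, 0.2, 0.6]])
assert np.allclose(P.sum(axis=1), 1.0)

v0 = np.array([1.0, 0.0, 0.0])   # particle starts in state 1 with probability 1
v1 = v0 @ P                      # distribution after one step
print(v1)                        # -> [0.5 0.3 0.2]
```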

Definition 2. (Discrete time Markov chain)
A Markov chain is said to be a discrete-time Markov chain if the state space of the possible outcomes of the process is finite and the process evolves in a countable number of steps. For example, when a fair coin is tossed into the air, the set of possible outcomes is {head, tail}, and the number of trials of this experiment represents the index parameter (time), which is discrete. Similarly, the Markov chain defined in Equation \eqref{2} is a discrete-time Markov chain, since the number of steps and the state space are known and countable.

Definition 3. (Irreducible Markov chain)
A Markov chain is said to be irreducible if it is possible to move from any of its states to any other state. A state \(j\) is said to be accessible from state \(i\) if the probability of moving from state \(i\) to \(j\) is greater than \(0\). Mathematically, state \(j\) is accessible from state \(i\) if:

\begin{equation} P(X_{t}=j|X_{t-1}=i)=P_{ij}>0. \label{irreducible} \end{equation}
(3)
If state \(j\) is accessible from state \(i\) and state \(i\) is also accessible from state \(j\), we say that the two states communicate \((i \leftrightarrow j)\). A stationary distribution \(\pi\) of the Markov chain is the probability distribution that remains unchanged as time progresses. Mathematically, the stationary distribution is represented as:
\begin{equation} \pi =\pi P, \label{3} \end{equation}
(4)
where \(\pi\) represents a row vector whose entries are probabilities which sum to \(1\) and \(P\) is the transition matrix. That is \(\sum_{i=1}^{n}\pi_{i}=1.\)

Definition 4. (Periodicity of the Markov chain)
A state \(i\) is known as a returning state if \(P^{n}_{ii}>0\) for some \(n>0\). The period \(d\) of state \(i\) is defined as: $$ d = \gcd\{ n >0: \Pr(X_n = i \mid X_0 = i) > 0\}. $$ This means that, starting in \(i\), the chain can return to \(i\) only at multiples of the period \(d\), and \(d\) is the largest integer with this property. State \(i\) is said to be aperiodic if \(d=1\).

Definition 5. (Recurrent state, transient state and ergodicity of Markov chain)
A state \(i\) is said to be recurrent (persistent) if, starting from that state, the chain returns to it with probability one. The recurrence time is the number of steps required to return to the same state. A state \(i\) is said to be transient if, starting from that state, there is a positive probability of never returning to it. A state is ergodic if it is positive recurrent and aperiodic.

Definition 6. (Chapman-Kolmogorov Equation)
This is the equation that relates the joint probability distributions of different sets of coordinates of a stochastic process. The Chapman-Kolmogorov equation is used to deduce the \(n\)-step transition probabilities of the Markov chain. Let \(P^{m+n}_{ij}\) denote the transition probability of a particle being in state \(j\), starting from state \(i\), after \(m+n\) steps. By definition, \(P^{m+n}_{ij}= P(X_{n+m}=j \mid X_{0}=i)\). Let \(k\) be an intermediate state between \(i\) and \(j\); then the probability of moving from state \(i\) to \(j\) in \(m+n\) steps is the sum over \(k\) of the products of the corresponding probabilities.
Suppose \(\{X_{n},n=0,1,2,\cdots\}\) is a homogeneous Markov chain. Then,

\begin{equation} P^{m+n}_{ij}=\sum_{k\in S}P^{m}_{ik}P^{n}_{kj}. \label{4} \end{equation}
(5)

Definition 7. [Absorbing and non-absorbing states of a Markov process]
An absorbing state is a state which, once entered, cannot be left, i.e. \(P_{ii}=1\). The converse scenario is termed a non-absorbing state.

Theorem 8. [General Lumpability]
The Markov chain \(M=(S,P,\pi)\) is lumpable with respect to the partition \(L=\{C_{1},C_{2},\ldots,C_{m}\}\) of \(S\) if and only if there exists a matrix \(\hat{P}\) of order \(m\) such that, for all \(i, j \in \{1,2,3,\ldots,m\}\) and \(k \geq 0\),

\begin{equation} \hat{P}^{k}_{ij}=\frac{\sum_{i'\in C_{i}}\pi (i')\sum_{j'\in C_{j}}P^{k}_{i'j'}}{\hat{\pi}(i)}, \quad\text{where}\ \hat{\pi}(i)=\sum_{i'\in C_{i}}\pi(i'). \label{1113} \end{equation}
(6)

Definition 9. [First passage time and Recurrence time]
The first passage time denoted by \(N_{ij}\) from \(i\) to \(j\), is the smallest positive integer \(n\) such that \(X_{n}=j\) when \(X_{0}=i\) [14]. If \(i=j\), then we define the recurrence time \(N_{ii}\) for state \(i\) [14].

2.2. Calculation of expected first passage time and recurrence time

The first passage time can simply be defined as the number of transitions required before the particle (molecule) reaches state \(j\) from state \(i\) for the first time, while the recurrence time is the number of transitions required before state \(i\) returns to itself. Let \(T_{ij}\) be the expected passage time from state \(i\) to state \(j\), let \(f_{ij}\) be the first passage time probability from state \(i\) to state \(j\) at step one (the hitting time probability), and let \(f^{n}_{ij}\) be the first passage time probability from state \(i\) to \(j\) in \(n\) steps. Let \(P_{ik}\) be the probability of the molecule moving from state \(i\) to an intermediate state \(k\). If \(f_{ij}< 1\), then it is possible that the particle takes infinitely long to move from state \(i\) to state \(j\), so in this case we define the expected first passage time to be infinite, i.e. \(T_{ij}=\infty\). If \(f_{ij}=1\), the molecule moves from state \(i\) to \(j\) in a finite number of steps. The equation below defines the probability that the first passage time from state \(i\) to \(j\) equals \(n\).
\begin{equation} f^{n}_{ij}=P(T_{ij}=n)=\sum_{k=1,k\neq j}^{m}P_{ik}f^{n-1}_{kj}. \label{6} \end{equation}
(7)
In Equation (7), \(f^{n}_{ij}\) serves as a preamble for the calculation of the expected passage time from state \(i\) to \(j\), denoted by \(T_{ij}\). The quantities \(f^{n}_{ij}\) and \(T_{ij}\) are very important because they describe how fast a particle moves in a cell complex. If \(n=1\), the particle moves only one step, so it will be in state \(j\) without passing through an intermediate state \(k\). With this absence of intermediate steps, the first passage probability from state \(i\) to \(j\), \(f^{1}_{ij}\), is simply the probability \(P_{ij}\) of the particle moving from state \(i\) to state \(j\). In the presence of intermediate steps \(k\), Equation (7) is valid for \(n \geq 2\). The expected first passage time can be defined as:
\begin{equation} T_{ij}=E[f^{n}_{ij}]=\sum_{n=1}^{\infty}nf^{n}_{ij} \label{7} \end{equation}
(8)
Generally, if a particle moves from state \(i\) to state \(j\) via intermediate states \(k\), then Equation (8) can be written as:
\begin{equation} T_{ij}=1+\sum_{k=1,k\neq j}^{m}P_{ik}T_{kj}. \label{8} \end{equation}
(9)
Using Equation (9), we can determine the expected number of transitions from state \(i\) to state \(j\) by solving a system of algebraic equations.
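Equation (9) is a linear system in the unknowns \(T_{kj}\): collecting the equations for all \(i\neq j\) gives \((I-Q)\,T=\mathbf{1}\), where \(Q\) is the transition matrix with the row and column of the target state \(j\) removed. A minimal Python sketch of this solve (assuming NumPy; the 3-state matrix is hypothetical):

```python
import numpy as np

def expected_passage_times(P, j):
    """Solve Equation (9): T_ij = 1 + sum_{k != j} P_ik T_kj for all i != j."""
    n = P.shape[0]
    keep = [s for s in range(n) if s != j]
    Q = P[np.ix_(keep, keep)]                 # transitions among the states other than j
    T = np.linalg.solve(np.eye(n - 1) - Q, np.ones(n - 1))
    return dict(zip(keep, T))                 # expected passage time to j from each other state

# Hypothetical 3-state row-stochastic matrix.
P = np.array([[0.2, 0.4, 0.4],
              [0.5, 0.3, 0.2],
              [0.3, 0.3, 0.4]])
print(expected_passage_times(P, j=0))         # expected hitting times of state 0
```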

2.3. Tiling

In mathematics, tiling is the arrangement or tessellation of a flat plane using geometrical shapes such as squares, hexagons, circles, pentagons and triangles, with no overlaps and no gaps between them. According to [10], a tile is a connected subset of \(\mathbb{R}^2\). In this study we adopt two-dimensional square and hexagonal tilings, as shown in Figure 1.

Figure 1. Hexagonal tiling

3. Investigation of the movement with discrete time in two small cell complexes

Given a two-dimensional tessellated surface, we seek to investigate the movements of a molecule to neighboring cells via the cell walls from a given starting point. We will discuss the movement of the molecule starting from different cells in \(2D\) cell complexes, specifically square and hexagonal tilings.

3.1. Hexagonal tiling

A hexagonal tile is a geometrical shape with six sides. Modeling the movement of the molecule on hexagonal tiles requires knowledge of the possible movements of the particle from the starting point. There are two possible types of starting point on a hexagonal tiling, namely the central cell and any of the border cells. When a molecule starts from the central cell, there are six possible movements via the cell walls. The probability \(p\) of the molecule moving to a given neighboring cell is the same for each of the six neighbors, analogous to the probability of each outcome when throwing a fair die once. However, when a molecule starts from any of the border cells, there are only three possible movements at step one.

For the small cell complex, the states are non-absorbing since the molecule is never trapped in its current cell. The movement in a small cell complex is also an irreducible Markov chain, because it is possible for the biological molecule to move from any state to any other. Let \(S=\{1,2,3,4,5,\ldots,m\}\) represent the state space and let \(n\) represent the number of steps the molecule takes as it moves via the cell walls from the starting cell. At rest \((n=0)\), the particle is at its starting state, hence the probability of the particle moving toward the neighboring cells is zero, i.e. \(p=0\). At the first step \((n=1)\), the particle moves to one of the neighboring tiles with probability \(p\). Figure 2 portrays the structure of the hexagonal tiling for the movement of the particle at \(n=1\) with the initial cell at the center, that is cell \(1\).

Figure 2. Hexagonal tiling under small cell complex

From Figure 2 we construct a transition matrix with \(7\) states as follows.
The probability of the particle moving from one cell to a nearest cell is \(p\). The total probability of moving from one cell to its neighbouring cells at step \(1\) is \(1\). Therefore, for instance, when the particle is initially at the central cell, the probability of the particle being in the same cell at step one is \(1-6p\), since the total probability adds to one. If the condition of symmetry holds, then \(P(i,j)=P(j,i)\). Now let \(A\) be the transition matrix of this behavior. The matrix \(A\) is as follows:
\begin{gather} A= \begin{bmatrix} 1-6p & p & p & p & p & p & p \\ p & 1-3p & p & 0 & 0 & 0 & p \\ p & p & 1-3p & p & 0 & 0 & 0 \\ p & 0 & p & 1-3p & p & 0 & 0 \\ p & 0 & 0 & p & 1-3p & p & 0 \\ p & 0 & 0 & 0 & p & 1-3p & p \\ p & p & 0 & 0 & 0 & p & 1-3p \\ \end{bmatrix} \label{9} \end{gather}
(10)

3.2. Equilibrium status of the Hexagonal small cell complex

To derive the equilibrium status of the cell complex, we first require the notion of the stationary distribution. Applying the transition matrix \(A\) given in Equation (10) to Equation (4), we have the following system of equations:
\begin{equation} \left\{ \begin{array}{c} \pi A=\pi\\ (\pi_{1}, \pi_{2}, \pi_{3}, \pi_{4}, \pi_{5}, \pi_{6}, \pi_{7})A=(\pi_{1}, \pi_{2}, \pi_{3}, \pi_{4}, \pi_{5}, \pi_{6}, \pi_{7})\\ \end{array} \right. \label{10} \end{equation}
(11)
Substituting Equation (10) into Equation (11) and performing further algebraic manipulation with \( p \neq 0 \) leads to
\begin{equation} \left\{ \begin{array}{c} (-6)\,\pi_{1} + \pi_{2} + \pi_{3} + \pi_{4} + \pi_{5} + \pi_{6} + \pi_{7} =0 \\ \pi_{1} - (3)\,\pi_{2}\, +\pi_{3}\, + \pi_{7}\, =0 \\ \pi_{1} + \pi_{2}\, -3\,\pi_{3}\, + \pi_{4}\, =0\\ \pi_{1} + \pi_{3}\, -3\,\pi_{4} + \pi_{5} =0\\ \pi_{1} + \pi_{4}\, -3\,\pi_{5}\, + \pi_{6} =0\\ \pi_{1} + \pi_{5}\, -3\,\pi_{6} + \pi_{7} =0\\ \pi_{1} + \pi_{2} + \pi_{6}\, -3\,\pi_{7} =0\\ \end{array} \right. \label{13} \end{equation}
(12)
To solve the system (12), we append the equation for the normalization of the vector \(\pi\), as:
\begin{equation} \pi _{1}+\pi _{2}+\pi _{3}+\pi _{4}+\pi _{5}+\pi _{6}+\pi _{7}=1. \label{14} \end{equation}
(13)
Solving the system (12) together with the normalization (13) gives the stationary distribution of the hexagonal cell complex as:
\begin{equation} (\pi_{1},\pi_{2},\pi_{3},\pi_{4},\pi_{5},\pi_{6},\pi_{7})=(1/7,1/7,1/7,1/7,1/7,1/7,1/7). \label{16} \end{equation}
(14)
Consider the following mini-theorem:

Theorem 10. A Markov chain with a symmetric transition matrix \(P\) has the uniform stationary distribution \(\pi_i=1/n\) for every state \(i\), where \(n\) is the number of states.
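Theorem 10 can be checked numerically for the hexagonal matrix \(A\) of Equation (10). The sketch below (assuming NumPy; \(p=0.1\) is an arbitrary admissible value with \(0<p<1/6\)) builds \(A\), confirms it is symmetric and row-stochastic, and iterates \(\pi\mapsto\pi A\) until the uniform vector of Equation (14) appears:

```python
import numpy as np

def hexagonal_A(p):
    """Transition matrix A of Equation (10): index 0 = central cell 1, indices 1..6 = border cells 2..7."""
    A = np.diag([1 - 6 * p] + [1 - 3 * p] * 6)
    for b in range(1, 7):
        A[0, b] = A[b, 0] = p                    # centre <-> each border cell
        A[b, 1 + b % 6] = A[1 + b % 6, b] = p    # neighbouring border cells on the ring
    return A

p = 0.1
A = hexagonal_A(p)
assert np.allclose(A.sum(axis=1), 1.0) and np.allclose(A, A.T)

pi = np.array([1.0, 0, 0, 0, 0, 0, 0])           # start at the central cell
for _ in range(500):                             # repeated application of pi -> pi A
    pi = pi @ A
print(np.round(pi, 4))                           # -> [0.1429 0.1429 ... 0.1429], i.e. 1/7 each
```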

3.3. Calculation of Expected First passage time and Recurrence time in Hexagonal cell complex

Let \(T_{j1}\) be the time of hitting state \(1\) starting from state \(j\), let \(P(T_{j1}=n)\) for \(j>1\) be the probability that the particle hits state \(1\) from state \(j\) in \(n\) steps, and let \(P_{ij}\) be the entries of the transition matrix \(A\) shown in Equation (10). We derive \(P(T_{j1}=n)\) for different values of \(n\) and \(j>1\). From Equation (7), we have:
\begin{equation} f^{n}_{ji}=P(T_{ji}=n)=\sum_{k=1,k\neq i}^{m}P_{jk}f^{n-1}_{ki}=\sum_{k=1,k\neq i}^{m}P_{jk}P(T_{ki}=n-1). \label{18} \end{equation}
(15)
For \(i=1\) and \(j>1\), Equation (15) becomes:
\begin{equation} f^{n}_{j1}=P(T_{j1}=n)=\sum_{k=2}^{m}P_{jk}f^{n-1}_{k1}=\sum_{k=2}^{m}P_{jk}P(T_{k1}=n-1). \label{19} \end{equation}
(16)
Because of the symmetries in the small cell complex, we derive only \( P(T_{21} = n)\), since \(P(T_{j1} = n) =P(T_{21} = n)\) for all \(j\geq2\). This means that the number of possible moves at step one is the same whenever the particle starts at one of the border cells.
From Equation (16), for \(k\in\{2,3,\cdots,m\}\), we have the following equation:
\begin{equation} P(T_{j1}=n)= P_{j2}P(T_{21}=n-1)+P_{j3}P(T_{31}=n-1)+ \dots +P_{j7}P(T_{71}=n-1). \label{20} \end{equation}
(17)
At the first step; \((n=1)\) we have that:
\begin{equation} P(T_{j1}=1)=f^{1}_{j1}=P_{j1}. \label{21} \end{equation}
(18)
Using \(j=2\) in Equation (18), the probability of the particle to be in state \(1\) from state \(2\) at the first step is:
\begin{equation} P(T_{21}=1)(p)= P_{21}=p. \label{23} \end{equation}
(19)
Equation (19) implies that the probability of the particle being in state \(1\) from state \(2\) at step one depends on the value of the probability parameter \(p\).
Figure 3 shows the movement of the molecule from state \(2\) to \(1\) in one step, \(n=1\). An arrow means the molecule moves from state \(2\) to \(1\) with probability \(p\). The same applies to the movements from states \(3\), \(4\), \(5\), \(6\) and \(7\) to state \(1\).

Figure 3. Hexagonal tiling in small complex to show the movement of the particle from state \(2\) to \(1\) at step one.

Generally, at any step \(n\), \(P(T_{j1}=n)\) for \(j>1\) and \(n\in \mathbb{N}\) is given by the following equation:
\begin{equation} P(T_{j1}=n)= p(1-p)^{n-1}, \label{34} \end{equation}
(20)
One can easily prove Equation (20) by mathematical induction. Equation (20) is very useful in calculating the expected passage time \(T_{ji}\) from state \(j\) to \(i\) with the help of Equation (8), as follows. From Equation (8), we deduce that
\begin{equation} T_{j1}=\sum_{n=1}^{\infty}nP(T_{j1}=n). \label{46} \end{equation}
(21)
Using Equation (20) into Equation (21) gives:
\begin{equation} T_{j1}=\sum_{n=1}^{\infty}np(1-p)^{n-1}. \label{47} \end{equation}
(22)
From the concept of derivatives, if \(f(x)\) is a function defined by \(f(x)=x^{n}\), then the derivative of \(f(x)\) is \(\frac{d(f(x))}{dx}=nx^{n-1}\). Using this idea in Equation (22): since \(\sum_{n=1}^{\infty}nx^{n-1}=\frac{d}{dx}\sum_{n=1}^{\infty}x^{n}=\frac{1}{(1-x)^{2}}\) for \(|x|<1\), setting \(x=1-p\) leads to:
\begin{equation} T_{j1}=1/p. \label{54} \end{equation}
(23)
Equation (23) is true for \(j=2,3,4,5,6,7\).
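A quick numerical sanity check of the summation in Equation (22) for a sample value of \(p\) (a sketch assuming NumPy; \(p=0.1\) is arbitrary): the truncated series indeed approaches \(1/p\).

```python
import numpy as np

p = 0.1
n = np.arange(1, 5000)
partial_sum = np.sum(n * p * (1 - p) ** (n - 1))   # truncation of Equation (22)
print(partial_sum, 1 / p)                          # both approximately 10.0
```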
If \(i=j\), then we define the expected recurrence time \(T^{*}_{11}\) as
\begin{equation} T^{*}_{11}=1+\sum_{k=2}^{7}P_{1k}T_{k1}. \label{56} \end{equation}
(24)
Simplification of Equation (24) gives:
\begin{equation} T^{*}_{11}=1+P_{12}\,T_{21}+P_{13}\,T_{31}+P_{14}\,T_{41}+P_{15}\,T_{51}+P_{16}\,T_{61}+P_{17}\,T_{71}. \label{57} \end{equation}
(25)
By using \(P_{ij}\) from Equation (10) and \(T_{j1}\) for \(j=(2,3,4,5,6,7)\) from Equation (23) into Equation (25) gives:
\begin{equation} T^{*}_{11}=1+p\times (1/p)+p\times (1/p)+p\times (1/p)+p\times (1/p)+p\times (1/p)+p\times (1/p). \label{58} \end{equation}
(26)
Therefore the expected recurrence time to state \(1\), \(T^{*}_{11}\), is:
\begin{equation} T^{*}_{11}(p)=7. \label{59} \end{equation}
(27)
From Equation (27), we observe that the expected recurrence time to state \(1\) is \(7\). This means that the expected recurrence time \(T^{*}_{11}\) does not depend on the value of the probability parameter \(p\): whether \(p\) is large or small, the expected recurrence time remains the same. This is because the Markov chain defined on the hexagonal tiling is a finite Markov chain with a single recurrence class. In this case there is a unique stationary distribution with \(\pi_{i} =\frac{1}{\mu_{ii}}\), where \(\mu_{ii}\) is the expected recurrence time to state \(i\). Since the stationary distribution is unique, the expected recurrence time is also unique, and here \(\mu_{11}=1/\pi_{1}=7\).
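The independence of \(T^{*}_{11}\) from \(p\) can also be seen by simulation. The Monte Carlo sketch below (assuming NumPy; \(p=0.05\) is arbitrary) estimates the recurrence time to the central cell and returns a value close to \(7\):

```python
import numpy as np

rng = np.random.default_rng(0)
p = 0.05                                   # any admissible value; the estimate stays near 7

# Hexagonal transition matrix A of Equation (10): index 0 = centre, indices 1..6 = border ring.
A = np.diag([1 - 6 * p] + [1 - 3 * p] * 6)
for b in range(1, 7):
    A[0, b] = A[b, 0] = p
    A[b, 1 + b % 6] = A[1 + b % 6, b] = p

def recurrence_time(start):
    """Number of steps until the simulated chain first returns to `start`."""
    state, steps = start, 0
    while True:
        state = rng.choice(7, p=A[state])
        steps += 1
        if state == start:
            return steps

samples = [recurrence_time(0) for _ in range(20_000)]
print(np.mean(samples))                    # approximately 7, independently of p
```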

3.4. Visualization of the transitional probabilities of transition matrix against time

In this part we plot graphs of the transition probabilities against time for different values of the probability parameter \(p\) in the transition matrix \(A\).
The following figures depict graphs of the transition probabilities against time when the particle is initially at the central cell, with probability vector \(P^{0}_{1}=(1,0,0,0,0,0,0)\).

Figure 4. Panel of graphs of the transition probabilities against time for the transition matrix \(A\) with starting point at the central cell and initial probability vector \(P^{0}_{1}=(1,0,0,0,0,0,0)\). In these plots the curves for states \(2\), \(3\), \(4\), \(5\), \(6\) and \(7\) overlap because these states have the same stochastic behavior with respect to the process.

Figure 4 shows how different values of the probability parameter \(p\) affect the attainment of the equilibrium status of the Markov process when the molecule is initially at the central cell.
3.4.1. When a particle is initially at one of the border cells
We now examine how the equilibrium is attained when the molecule is initially at one of the border cells, with initial probability vector \(P^{0}_{2}=(0,1,0,0,0,0,0)\) and the same values of the probability parameter \(p\) as in Figure 4.

Figure 5. Panel of graphs of the transition probabilities against time for the transition matrix \(A\) with starting point at one of the border cells and initial probability vector \(P^{0}_{2}=(0,1,0,0,0,0,0)\). In these plots the curves for states \(3\) and \(7\) overlap, and the curves for states \(4\) and \(6\) overlap, because the respective states have the same stochastic behavior with respect to the process.

From Figures 4 and 5, we deduce that when the molecule is initially at the central cell, the process takes fewer steps to reach equilibrium than when the molecule is initially at one of the border cells.

Another way to investigate graphically how fast the process approaches equilibrium is to use a log scale, as shown in the figures below.

Figure 6. A panel of graphs to show how fast the equilibrium is attained in log scale under hexagonal tiling when the particle is initially at the central cell with probability vector \(v=(1,0,0,0,0,0,0)\). 

Figure 6 shows how the equilibrium is attained when the particle is initially at the central cell. Figure 6(b) shows the difference between the process and the equilibrium distribution, and Figure 6(c) depicts how the steepness of the slope reflects the speed at which the stationary distribution is attained.
3.4.2. The use of log scale when the probability parameter \(p=\frac{1}{7}\)
We now examine how the value of the probability parameter \(p\) affects the slope on the log scale toward the attainment of the equilibrium distribution.
Figure 7 is the continuation of Figure 6, but now with the probability parameter \(p=\frac{1}{7}\). The only difference between Figures 7 and 6 is the time the process takes to attain equilibrium. The slope in Figure 7(c) is slightly gentler than the slope in Figure 6(c), which is much steeper, indicating a faster-moving particle. In both cases the particle is initially at the central cell.

Figure 7. A panel of graphs to show how fast the equilibrium is attained in log scale under hexagonal tiling when the particle is initially at the central with probability vector \(v=(1,0,0,0,0,0,0)\).

Generally, we observe that the initial position of the particle and the value of the probability parameter \(p\) have direct consequences for the movement of the particle on the hexagonal small cell complex, as seen in Figures 6 and 7.
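The behaviour summarized in Figures 4, 6 and 7 can be reproduced by iterating the distribution directly. The sketch below (assuming NumPy; the two values of \(p\) are illustrative) starts the particle at the central cell and prints the maximal deviation from the uniform equilibrium after a few steps:

```python
import numpy as np

def hexagonal_A(p):
    """Transition matrix A of Equation (10): index 0 = centre, indices 1..6 = border ring."""
    A = np.diag([1 - 6 * p] + [1 - 3 * p] * 6)
    for b in range(1, 7):
        A[0, b] = A[b, 0] = p
        A[b, 1 + b % 6] = A[1 + b % 6, b] = p
    return A

pi_eq = np.full(7, 1 / 7)                    # uniform equilibrium, Equation (14)

for p in (0.10, 1 / 7):
    A = hexagonal_A(p)
    v = np.array([1.0, 0, 0, 0, 0, 0, 0])    # particle initially at the central cell
    dist = []
    for n in range(31):
        dist.append(np.abs(v - pi_eq).max()) # deviation from equilibrium at step n
        v = v @ A
    print(f"p = {p:.4f}: deviation after 10 steps = {dist[10]:.2e}, after 30 steps = {dist[30]:.2e}")
```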

3.5. Square tiling

A square tile is a geometrical shape with four sides. Under this tiling, the possible movements of a biological molecule depend on the location of the starting cell. First, when the starting cell is the central cell, the biological molecule has four possible movements away from the starting cell: north, south, east and west. Secondly, when the starting cell is one of the border cells at a corner, there are only two possible movements at step one, namely left/right and downward/upward movements. Thirdly, if the starting cell is one of the cells located to the north, south, east or west of the central cell, then there are only three possible movements of the molecule away from that cell: left/right, upward and downward movements. Let \(S=\{1,2,3,4,5,\ldots,m\}\) be the state space, and let \(n\) represent the number of steps the molecule takes as it moves via the cell walls from the initial point. At \(n=0\), no movement has occurred, so the particle is still at the initial cell, and the probability of movement to any of the neighboring cells is \(p=0\). Let \((X_{t}, t=0,1,2,3,\ldots,n)\) be the stochastic process with the Markov property, and let \(i\) and \(j\) be the current and next states respectively. At \(n=1\), the particle moves to one of the neighboring cells with probability \(p\). The figure below delineates the structure of the square tiling at \(n=1\).

Figure 8. Square tiling in small complex with starting point at the central cell

From Figure 8 we construct a transition matrix with \(9\) states such that, if the condition of symmetry holds, \(P(i,j)=P(j,i)\). At step \(1\), when the particle is initially at the central cell, it can move only to cells \(2\), \(4\), \(6\) and \(8\), with probability \(p\) for each cell; the probability of the particle remaining in cell one at step \(1\) is therefore \(1-4p\), since the total probability adds to \(1\). Now let \(P\) be the transition matrix of this behavior, with initial vector \(P^{0}_{1}=(1,0,0,0,0,0,0,0,0)\). The matrix \(P\) is as follows:
\begin{gather} P= \begin{bmatrix} 1-4p & p & 0 & p & 0 & p & 0 & p & 0 \\ p & 1-3p & p & 0 & 0 & 0 & 0 & 0 & p \\ 0 & p & 1-2p & p & 0 & 0 & 0 & 0 & 0 \\ p & 0 & p & 1-3p & p & 0 & 0 & 0 & 0 \\ 0 & 0 & 0 & p & 1-2p & p & 0 & 0 & 0 \\ p & 0 & 0 & 0 & p & 1-3p & p & 0 & 0 \\ 0 & 0 & 0 & 0 & 0 & p & 1-2p & p & 0 \\ p & 0 & 0 & 0 & 0 & 0 & p & 1-3p & p \\ 0 & p & 0 & 0 & 0 & 0 & 0 & p & 1-2p \\ \end{bmatrix} \label{60} \end{gather}
(28)
Since it is possible to move from any state to any other, the transition matrix \(P\) defined in Equation (28) characterizes an irreducible Markov chain. States \(1\), \(2\), \(3\), \(4\), \(5\), \(6\), \(7\), \(8\) and \(9\) form a closed recurrent class. The Markov chain defined by Equation (28) is aperiodic, and all states are non-absorbing. Since the transition matrix \(P\) defines an irreducible, aperiodic Markov chain on a finite state space, it is also ergodic.
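The \(9\times9\) matrix of Equation (28) can be generated programmatically. The sketch below (assuming NumPy) labels the centre as index \(0\) and the surrounding ring as indices \(1\) to \(8\) (states \(2\) to \(9\)), and checks the row sums, the symmetry, and the diagonal entries \(1-4p\), \(1-3p\), \(1-2p\):

```python
import numpy as np

def square_P(p):
    """Transition matrix P of Equation (28): index 0 = central cell 1, indices 1..8 = ring cells 2..9."""
    P = np.zeros((9, 9))
    for r in range(1, 9):
        P[r, 1 + r % 8] = P[1 + r % 8, r] = p    # consecutive ring cells share a wall
    for r in (1, 3, 5, 7):                        # states 2, 4, 6, 8 also share a wall with the centre
        P[0, r] = P[r, 0] = p
    P[np.arange(9), np.arange(9)] = 1 - P.sum(axis=1)   # diagonal makes every row sum to 1
    return P

P = square_P(0.1)
assert np.allclose(P.sum(axis=1), 1.0) and np.allclose(P, P.T)
print(np.round(np.diag(P), 2))   # -> [0.6 0.7 0.8 0.7 0.8 0.7 0.8 0.7 0.8]
```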

3.6. Equilibrium status of the square small cell complex

To determine the equilibrium status of the square small cell complex, we again rely on the stationary distribution.
Since the transition matrix \(P\) is symmetric, we apply the assertion of Theorem 10 to derive the equilibrium status of the square small cell complex. Theorem 10 states that if \(P\) is a symmetric transition matrix then the stationary distribution is uniform, \(1/n\), where \(n\) is the number of states. Therefore the stationary distribution is given by the following equation:
\begin{equation} \left\{ \begin{array}{c} (\pi_{1},\pi_{2},\pi_{3},\pi_{4},\pi_{5},\pi_{6},\pi_{7},\pi_{8},\pi_{9})= (1/9,1/9,1/9,1/9,1/9,1/9,1/9,1/9,1/9).\\ \end{array} \right. \label{61} \end{equation}
(29)

3.7. Calculation of expected first passage time and recurrence time in square cell complex

As in the hexagonal small cell complex, we derive the hitting time probabilities and the expected passage time from one state to another in the square cell complex.
Let \(T_{j1}\) be the time of hitting state \(1\) starting from state \(j\), let \(P(T_{j1}=n)\) for \(j>1\) be the probability that the particle hits state \(1\) from state \(j\) in \(n\) steps, and let \(P_{ij}\) be the entries of the transition matrix \(P\) shown in Equation (28).
We derive \(P(T_{j1}=n)\) for different values of \(n\) and \(j>1\). Using the idea of Equation (7), we have
\begin{equation} f^{n}_{ji}=P(T_{ji}=n)=\sum_{k=1,k\neq i}^{m}P_{jk}f^{n-1}_{ki}=\sum_{k=1,k\neq i}^{m}P_{jk}P(T_{ki}=n-1). \label{62} \end{equation}
(30)
Now for \(i=1\) and \(j>1\), Equation (30) becomes:
\begin{equation} f^{n}_{j1}=P(T_{j1}=n)=\sum_{k=2}^{m}P_{jk}f^{n-1}_{k1}=\sum_{k=2}^{m}P_{jk}P(T_{k1}=n-1). \label{63} \end{equation}
(31)
From Equation (31), for \(k\in\{2,3,\ldots,9\}\) with respect to the square tiling, we have the following equation:
\begin{equation} P(T_{j1}=n)= P_{j2}\,P(T_{21}=n-1)+\dots+P_{j8}\,P(T_{81}=n-1)+P_{j9}\,P(T_{91}=n-1). \label{64} \end{equation}
(32)
At the first step, that is at \((n=1)\), we have
\begin{equation} P(T_{j1}=1)=f^{1}_{j1}=P_{j1} \label{65} \end{equation}
(33)
Further simplification of Equation (33) leads to:
\begin{equation} P(T_{j1}=1)= P_{j1}. \label{66} \end{equation}
(34)
Because of the symmetries in the small cell complex, we derive only \(P(T_{21} = n)\) and \( P(T_{51} = n)\), since \(P(T_{j1} = n) =P(T_{21} = n)\) for \(j=4,6,8\) and \(P(T_{j1} = n) =P(T_{51} = n)\) for \(j=3,7,9\), these states having the same stochastic behavior with respect to the Markov process at step one. Using \(j=2\) and \(j=5\) in Equation (34) with \(P_{ij}\) from Equation (28), we have
\begin{equation} \left\{ \begin{array}{c} P(T_{21}=1)(p)= P_{21}=p\\ P(T_{51}=1)(p)= P_{51}=0\\ \end{array} \right. \label{67} \end{equation}
(35)
Figure 9 shows the movement of the molecule in a square cell complex from states \(j>1\) to state \(1\). The red bullets (dots) represent zero probability of movement from the respective states to state \(1\) at step one. The arrows represent the probability \(p\) of the molecule moving from the respective state to state \(1\) at \(n=1\). There is only one path to the central cell at step one.

Figure 9. Square tiling in small cell complex to depict the movement of the particle from state \(j>1\) to state \(1\) in one step.

For \((n=2)\), Equation (32) leads to:
\begin{equation} P(T_{j1}=2)= P_{j2}\,P(T_{21}=1)+P_{j3}\,P(T_{31}=1)+\dots +P_{j8}\,P(T_{81}=1)+P_{j9}\,P(T_{91}=1). \label{68} \end{equation}
(36)
For \(j=(2,5)\), Equation (36) gives:
\begin{equation} \left\{ \begin{array}{c} P(T_{21}=2)=P_{22}\,P(T_{21}=1)+P_{23}\,P(T_{31}=1)+\dots+P_{28}\,P(T_{81}=1)+P_{29}\,P(T_{91}=1)\\ P(T_{51}=2)=P_{52}\,P(T_{21}=1)+P_{53}\,P(T_{31}=1)+\dots+P_{58}\,P(T_{81}=1)+P_{59}\,P(T_{91}=1)\\ \end{array} \right. \label{69} \end{equation}
(37)
Substituting Equation (35) into Equation (37), we have
\begin{equation} \left\{ \begin{array}{c} P(T_{21}=2)= p\,P_{22}+p\,P_{24}+p\,P_{26}+p\,P_{28}\\ P(T_{51}=2)= p\,P_{52}+p\,P_{54}+p\,P_{56}+p\,P_{58}\\ \end{array} \right. \label{70} \end{equation}
(38)
Using \(P_{ij}\) from Equation (28) in Equation (38), together with the fact that \(P_{24}=P_{26}=P_{28}=P_{52}=P_{58}=0\) (because there is no one-step transition to the respective states), gives:
\begin{equation} \left\{ \begin{array}{c} P(T_{21}=2)(p)= p\,(1-3p)\\ P(T_{51}=2)(p)= 2p^{2}\\ \end{array} \right. \label{72} \end{equation}
(39)
Figure 10 delineates the movement of the molecule from states \(j>1\) to state \(1\) at \(n=2\). When the molecule at step \(1\) is at state \(2\), \(4\), \(6\) or \(8\), it can move to state \(1\), which means that at step \(2\) there is no further movement from the respective states to state \(1\). Conversely, when the particle is at state \(3\), \(5\), \(7\) or \(9\), at step \(1\) the molecule cannot move to state \(1\); however, at step \(2\) the molecule can reach state \(1\) via the neighboring states.

Figure 10. Square tiling in a small complex to depict the movement of the particle from state \(j>1\) to state \(1\) in two steps.

For \((n=3)\), Equation (32) reduces to:
\begin{equation} P(T_{j1}=3)=P_{j2}\,P(T_{21}=2)+P_{j3}\,P(T_{31}=2)+\dots+P_{j8}\,P(T_{81}=2)+P_{j9}\,P(T_{91}=2) \label{73} \end{equation}
(40)
Using \(j=(2,5)\) in Equation (40), we have
\begin{equation} \left\{ \begin{array}{c} P(T_{21}=3)= P_{22}\,P(T_{21}=2)+\dots+P_{28}\,P(T_{81}=2)+P_{29}\,P(T_{91}=2)\\ P(T_{51}=3)= P_{52}\,P(T_{21}=2)+\dots+P_{58}\,P(T_{81}=2)+P_{59}\,P(T_{91}=2)\\ \end{array} \right. \label{74} \end{equation}
(41)
Using \(P_{ij}\) from Equation (28) together with the results from Equation (39) into Equation (41), we have:
\begin{equation} \left\{ \begin{array}{c} P(T_{21}=3)= (1-3p)\times(P(T_{21}=2))+p\times P(T_{51}=2)+p\times P(T_{91}=2)\\ P(T_{51}=3)= p\times(P(T_{41}=2))+(1-2p)\,(P(T_{51}=2))+p\times(P(T_{61}=2))\\ \end{array} \right. \label{75} \end{equation}
(42)
Using the idea of symmetries, we have that \(P(T_{41}=2)=P(T_{61}=2)=P(T_{21}=2)\) and \(P(T_{91}=2)=P(T_{51}=2)\), hence Equation (42) gives:
\begin{equation} \left\{ \begin{array}{c} P(T_{21}=3)(p)= p(13p^{2}-6p+1)\\ P(T_{51}=3)(p)= 2p^{2}(2-5p)\\ \end{array} \right. \label{78} \end{equation}
(43)
Figure 11 renders the movement of the biological molecule from state \(j>1\) to state \(1\) in three steps. In this figure we show the movements of the particle from state \(j>1\) to state \(1\) in two circumstances with \(n=3\): first, when the molecule starts at one of the corner cells, and second, when the molecule starts at one of the up/down/left/right cells.

Figure 11. Square tiling in a small complex to depict the movement of the particle from state \(j>1\) to state \(1\) in three steps.

When the molecule is initially at one of the corner cells, at step one it cannot move to state \(1\), but it can move to one of the neighboring cells; in this case the molecule can be in state \(1\) after two steps. Conversely, when the molecule is initially at one of the side cells other than the corner cells, at step \(1\) it is possible for the molecule to move to state one directly. Alternatively, it can first move to one of the other neighboring cells, then at \(n=2\) it can either move to a neighbor of the current cell or move back to the initial cell, and finally at \(n=3\) it moves to state \(1\).
3.7.1. Calculation of the expected passage time under small cell complex in square tiling
To calculate the expected passage time \(T_{j1}\), we follow the procedure below with reference to Equation (9) and the transition matrix in Equation (28). The expected passage time from state \(j>1\) to state \(1\) is given by:
\begin{equation} T_{j1}=1+\sum_{k=2}^{m}P_{jk}T_{k1}. \label{79} \end{equation}
(44)
For \(m=9\), Equation (44) gives:
\begin{equation} T_{j1}=1+P_{j2}\,T_{21}+P_{j3}\,T_{31}+P_{j4}\,T_{41}+P_{j5}\,T_{51}+P_{j6}\,T_{61}+P_{j7}\,T_{71}+P_{j8}\,T_{81}+P_{j9}\,T_{91}. \label{80} \end{equation}
(45)
Setting \(j=(2,3,4,5,6,7,8,9)\) in Equation (45) leads to:
\begin{equation} \left\{ \begin{array}{c} T_{21}=1+P_{22}\,T_{21}+P_{23}\,T_{31}+P_{24}\,T_{41}+P_{25}\,T_{51}+P_{26}\,T_{61}+P_{27}\,T_{71}+P_{28}\,T_{81}+P_{29}\,T_{91}\\ T_{31}=1+P_{32}\,T_{21}+P_{33}\,T_{31}+P_{34}\,T_{41}+P_{35}\,T_{51}+P_{36}\,T_{61}+P_{37}\,T_{71}+P_{38}\,T_{81}+P_{39}\,T_{91}\\ T_{41}=1+P_{42}\,T_{21}+P_{43}\,T_{31}+P_{44}\,T_{41}+P_{45}\,T_{51}+P_{46}\,T_{61}+P_{47}\,T_{71}+P_{48}\,T_{81}+P_{49}\,T_{91}\\ T_{51}=1+P_{52}\,T_{21}+P_{53}\,T_{31}+P_{54}\,T_{41}+P_{55}\,T_{51}+P_{56}\,T_{61}+P_{57}\,T_{71}+P_{58}\,T_{81}+P_{59}\,T_{91}\\ T_{61}=1+P_{62}\,T_{21}+P_{63}\,T_{31}+P_{64}\,T_{41}+P_{65}\,T_{51}+P_{66}\,T_{61}+P_{67}\,T_{71}+P_{68}\,T_{81}+P_{69}\,T_{91}\\ T_{71}=1+P_{72}\,T_{21}+P_{73}\,T_{31}+P_{74}\,T_{41}+P_{75}\,T_{51}+P_{76}\,T_{61}+P_{77}\,T_{71}+P_{78}\,T_{81}+P_{79}\,T_{91}\\ T_{81}=1+P_{82}\,T_{21}+P_{83}\,T_{31}+P_{84}\,T_{41}+P_{85}\,T_{51}+P_{86}\,T_{61}+P_{87}\,T_{71}+P_{88}\,T_{81}+P_{89}\,T_{91}\\ T_{91}=1+P_{92}\,T_{21}+P_{93}\,T_{31}+P_{94}\,T_{41}+P_{95}\,T_{51}+P_{96}\,T_{61}+P_{97}\,T_{71}+P_{98}\,T_{81}+P_{99}\,T_{91}\\ \end{array} \right. \label{81} \end{equation}
(46)
Rearranging Equation (46) with the probabilities \(P_{ij}\) from Equation (28) and \(p\ne0\) leads to:
\begin{equation} \left\{ \begin{array}{c} T_{21}(p)=2/p\\ T_{31}(p)=2.5/p\\ T_{41}(p)=2/p\\ T_{51}(p)=2.5/p\\ T_{61}(p)=2/p\\ T_{71}(p)=2.5/p\\ T_{81}(p)=2/p\\ T_{91}(p)=2.5/p\\ \end{array} \right. \label{86} \end{equation}
(47)
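The values in Equation (47) can be confirmed by solving the linear system (46) numerically. The sketch below (assuming NumPy; \(p=0.1\) is arbitrary) removes the row and column of state \(1\) from \(P\), solves \((I-Q)\,T=\mathbf{1}\), and recovers \(2/p\) and \(2.5/p\):

```python
import numpy as np

p = 0.1
# Transition matrix P of Equation (28): index 0 = centre, indices 1..8 = ring cells 2..9.
P = np.zeros((9, 9))
for r in range(1, 9):
    P[r, 1 + r % 8] = P[1 + r % 8, r] = p
for r in (1, 3, 5, 7):
    P[0, r] = P[r, 0] = p
P[np.arange(9), np.arange(9)] = 1 - P.sum(axis=1)

# Equation (46) in matrix form: (I - Q) T = 1, with Q = P restricted to states 2..9.
Q = P[1:, 1:]
T = np.linalg.solve(np.eye(8) - Q, np.ones(8))
print(np.round(T * p, 2))   # -> [2.  2.5 2.  2.5 2.  2.5 2.  2.5], matching Equation (47)
```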
3.7.2. Calculation of recurrence time under small cell complex of the square tiling
If \(i=j\), then we define the expected recurrence time \(T^{*}_{11}\) as
\begin{equation} T^{*}_{11}=1+\sum_{k=2}^{m}P_{1k}T_{k1}. \label{87} \end{equation}
(48)
Simplification of Equation (48) leads to:
\begin{equation} T^{*}_{11}=1+P_{12}\,T_{21}+P_{13}\,T_{31}+P_{14}\,T_{41}+P_{15}\,T_{51}+P_{16}\,T_{61}+P_{17}\,T_{71}+P_{18}\,T_{81}+P_{19}\,T_{91}. \label{88} \end{equation}
(49)
Using the probabilities \(P_{ij}\) from Equation (28) and \(T_{j1}\) for \(j=(2,3,4,5,6,7,8,9)\) from Equation (47) in Equation (49) gives:
\begin{equation} T^{*}_{11}(p)=9. \label{90} \end{equation}
(50)
From the result of Equation (50), we observe that the expected recurrence time to state \(1\) is \(9\). This means that the expected recurrence time \(T^{*}_{11}\) does not depend on the value of the probability parameter \(p\).

3.8. Visualization of the transition probabilities against time for the transition matrix \(P\) under the square small cell complex

Under the square small cell complex we have three different types of non-equivalent cells:
  1. Cell complex with starting point at the central cell, with initial vector \(P^{0}_1 = (1, 0, 0, 0, 0, 0, 0, 0, 0)\).
  2. Cell complex with starting point at the corner border cell, with initial vector \(P^{0}_2 = (0, 1, 0, 0, 0, 0, 0, 0, 0)\).
  3. Cell complex with starting point at one of the side cells, that is to the north, south, east or west of the central cell, with initial vector \(P^{0}_3 = (0, 0, 1, 0, 0, 0, 0, 0, 0)\).

We will show different graphs of the transition probabilities against time for different values of the probability parameter \(p\) in the transition matrix \(P\). Figure 12 shows the transition probabilities against time when the particle is initially at the central cell with probability vector \(P^{0}_{1}=(1,0,0,0,0,0,0,0,0)\).

Figure 12. Panel of graphs of the transition probabilities against time for the transition matrix \(P\) with starting point at the central cell and initial probability vector \(P^{0}_{1}=(1,0,0,0,0,0,0,0,0)\). In these plots the curves for states \(2\), \(4\), \(6\) and \(8\) overlap, and the curves for states \(3\), \(5\), \(7\) and \(9\) overlap, because the respective states have the same stochastic behavior with respect to the process.

In Figure 12 we observed the behavior of the process when the particle is initially at the central cell for different values of the probability parameter \(p\). We now explore how different values of the probability parameter alter the process when the molecule is initially at one of the corner border cells. When the particle is initially at one of the corner border cells, with probability vector \(P^{0}_2 = (0, 1, 0, 0, 0, 0, 0, 0, 0)\), we have the following figures.

Figure 13. Panel of graphs of the transition probabilities against time for the transition matrix \(P\) with starting point at one of the corner border cells and initial probability vector \(P^{0}_{2}=(0,1,0,0,0,0,0,0,0)\). In these plots the curves for states \(3\) and \(9\), states \(4\) and \(8\), and states \(5\) and \(7\) overlap to form new states respectively, because they have the same stochastic behavior with respect to the process.

The remaining non-equivalent cell in the square cell complex is one of the side cells, that is to the north, south, east or west of the central cell, with initial probability vector \(P^{0}_{3}=(0,0,1,0,0,0,0,0,0)\).

From Figure 14 below, we observe that equilibrium is attained faster for larger values of the probability parameter \(p\) than for small values. This is justified by comparing the number of steps taken by the process to reach equilibrium in Figure 14(a) with the number of steps in Figure 14(b). In Figure 14(b) the process takes many steps to reach equilibrium, which means that the molecule is slow when the probability parameter \(p\) is small. From this we deduce that the movement of the molecule is faster when the value of the probability parameter \(p\) is large and slower when it is small.

Figure 14. Panel of graphs of the transition probabilities against time for the transition matrix \(P\) with starting point at one of the side cells, that is to the north, south, east or west of the central cell, and initial probability vector \(P^{0}_{3}=(0,0,1,0,0,0,0,0,0)\). In these plots the curves for states \(2\) and \(4\), states \(5\) and \(9\), and states \(6\) and \(8\) overlap to form new states respectively, because they have the same stochastic behavior with respect to the process.

With these three non-equivalent cells in the square small cell complex, we deduce that the molecule attains equilibrium fastest when the particle is initially at the central cell, because of the larger number of possible moves at the first step. That is, the number of cells that can be visited at step one is larger when the particle is initially at the central cell than when it is initially at one of the border cells.

3.9. \(n\)-step transition matrix

From Equation (1), we have that: $$ P(X_{t+1} = j \mid X_{t} = i, X_{t-1} = i_{t-1}, X_{t-2} = i_{t-2}, \cdots ) = P(X_{t+1} = j \mid X_{t} = i)=P(i , j ),$$ where \(P(i , j )\) defines the probability of the biological molecule moving from state \(i\) to state \(j\) in one step. At step \(2\), the transition probability of the biological molecule being in state \(j\), given that \(i\) is the initial state, is obtained by summing the products of the probabilities from state \(i\) to state \(k\) with the probabilities from state \(k\) to \(j\), where \(k\) is an intermediate stop between \(i\) and \(j\).

Let \(n\) be the number of steps the molecule takes from state \(i\) to state \(j\); then the transition probability from state \(i\) to state \(j\) in \(n\) steps, denoted by \(P^{n}_{ij}\), can be deduced from the Chapman-Kolmogorov equation in Definition 6. The collection of these transition probabilities forms the elements of the \(n\)-step transition matrix of the Markov chain. Suppose that \(K\) is the transition matrix defined at \(n=1\); then the \(2\)-step transition matrix is \(K^{2}\) and the \(3\)-step transition matrix is \(K^{3}\). In general, we define the \(n\)-step transition matrix as \(K^{n}\), where \(K^{n}\) represents the multiplication of the matrix \(K\) by itself \(n\) times. For our case we have the transition matrices \(P\) and \(A\) that describe the movement of the biological molecule on the small cell complex in square and hexagonal tiling respectively. Taking the transition matrix \(P\) as the reference matrix, we derive the two-step transition matrix, denoted by \(P^{2}\).

Let \(P_{ik}\) be the probability of the biological molecule being at an intermediate stop \(k\) coming from state \(i\), let \(P_{kj}\) be the probability of the particle being in state \(j\) coming from the intermediate stop \(k\), and let \(M\) be the number of states; then

\begin{equation} P^{2}_{ij}= \sum_{k=1}^{M} P_{ik}P_{kj}. \label{91} \end{equation}
(51)
Taking \(k=(1,2,3,4,\dots,M)\) in Equation (51), we have:
\begin{equation} \left\{ \begin{array}{c} P^{2}_{11}= P_{11}P_{11}+P_{12}P_{21}+P_{13}P_{31}+P_{14}P_{41}+\dots+P_{1M}P_{M1}\\ P^{2}_{12}= P_{11}P_{12}+P_{12}P_{22}+P_{13}P_{32}+P_{14}P_{42}+\dots+P_{1M}P_{M2}\\ \vdots\\ P^{2}_{MM}= P_{M1}P_{1M}+P_{M2}P_{2M}+P_{M3}P_{3M}+P_{M4}P_{4M}+\dots+P_{MM}P_{MM}\\ \end{array} \right. \label{92} \end{equation}
(52)
By combining the results of Equation (52), we have the following \(2-\)steps transition matrix.
\begin{gather} P^{2}= \begin{bmatrix} P^{2}_{11} & P^{2}_{12} & P^{2}_{13} & P^{2}_{14} & P^{2}_{15} & P^{2}_{16} & P^{2}_{17} & P^{2}_{18} & P^{2}_{19} \\ P^{2}_{21} & P^{2}_{22} & P^{2}_{23} & P^{2}_{24} & P^{2}_{25} & P^{2}_{26} & P^{2}_{27} & P^{2}_{28} & P^{2}_{29} \\ P^{2}_{31} & P^{2}_{32} & P^{2}_{33} & P^{2}_{34} & P^{2}_{35} & P^{2}_{36} & P^{2}_{37} & P^{2}_{38} & P^{2}_{39} \\ P^{2}_{41} & P^{2}_{42} & P^{2}_{43} & P^{2}_{44} & P^{2}_{45} & P^{2}_{46} & P^{2}_{47} & P^{2}_{48} & P^{2}_{49} \\ P^{2}_{51} & P^{2}_{52} & P^{2}_{53} & P^{2}_{54} & P^{2}_{55} & P^{2}_{56} & P^{2}_{57} & P^{2}_{58} & P^{2}_{59} \\ P^{2}_{61} & P^{2}_{62} & P^{2}_{63} & P^{2}_{64} & P^{2}_{65} & P^{2}_{66} & P^{2}_{67} & P^{2}_{68} & P^{2}_{69} \\ P^{2}_{71} & P^{2}_{72} & P^{2}_{73} & P^{2}_{74} & P^{2}_{75} & P^{2}_{76} & P^{2}_{77} & P^{2}_{78} & P^{2}_{79} \\ P^{2}_{81} & P^{2}_{82} & P^{2}_{83} & P^{2}_{84} & P^{2}_{85} & P^{2}_{86} & P^{2}_{87} & P^{2}_{88} & P^{2}_{89} \\ P^{2}_{91} & P^{2}_{92} & P^{2}_{93} & P^{2}_{94} & P^{2}_{95} & P^{2}_{96} & P^{2}_{97} & P^{2}_{98} & P^{2}_{99} \\ \end{bmatrix} \label{93} \end{gather}
(53)
The computation continues up to the \(n\)-step transition probabilities \(P^{n}_{ij}\) of the \(n\)-step transition matrix, with \(n\) being the maximum number of steps. The following general formula shows how to obtain the entries of the \(n\)-step transition matrix \(P^{n}\):
\begin{equation} P^{n}_{ij}= \sum_{k=1}^{M} P^{m}_{ik}P^{n-m}_{kj}. \label{97} \end{equation}
(54)
Equation (54) gives the entries of the \(n\)-step transition matrix of the Markov chain, for any intermediate number of steps \(0<m<n\).
From Equations (52) and (54) we observe that \(P^{n}_{ij}\) results from products of transition matrices. Without loss of generality, and to avoid tedious computations with the Chapman-Kolmogorov equation, we simply raise the matrix to the \(n^{th}\) power to obtain the \(n\)-step transition matrix. Therefore the \(n\)-step transition matrices for our transition matrices \(P\) and \(A\) are \(P^{n}\) and \(A^{n}\).
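A short sketch (assuming NumPy; the 3-state matrix is hypothetical) of computing an \(n\)-step transition matrix by matrix powers, together with a check of the Chapman-Kolmogorov factorization of Equation (54):

```python
import numpy as np

K = np.array([[0.5, 0.3, 0.2],     # hypothetical one-step transition matrix
              [0.1, 0.8, 0.1],
              [0.2, 0.2, 0.6]])

n, m = 5, 2
K_n = np.linalg.matrix_power(K, n)                 # n-step transition matrix K^n

# Chapman-Kolmogorov / Equation (54): K^n = K^m K^(n-m) for any 0 < m < n.
assert np.allclose(K_n, np.linalg.matrix_power(K, m) @ np.linalg.matrix_power(K, n - m))
print(np.round(K_n, 4))
```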

3.10. Lumpability of the Markov chain under the small cell complex

Theorem 8 states that if a Markov chain \(\{X_t\}, t\in T\), \(T\geq0\), can be partitioned into sub-processes each of which retains the Markov property, then the Markov chain is lumpable. The main idea of lumpability is to allow aggregation of the Markov chain, reducing the number of states while maintaining the Markov property.

Using the assertion of Theorem 8, we can aggregate the Markov chain by reducing the number of states in the respective transition matrices \(P\) and \(A\). Let us consider the aggregation of the transition matrix \(A\), with the aid of Figure 2 and the assertion of Theorem 8, by symmetries. Let \(C\) denote the central cell and \(B\) denote the border cells of the hexagonal tiling in Equation (10). Since all the border cells are symmetric, we can define \(B'\) as the new state formed by aggregating all the border cells. Using this idea, we consider the group \(G\) of state symmetries of the transition matrix \(A\) with the structure of the symmetry group of the regular hexagon (the dihedral group \(D_6\)), generated by the permutation \(\rho_1=(1)(234567)\), where \((1)\) stands for \(C'\) and \((234567)\) represents the state \(B'\) formed by aggregation. This means that the original Markov process \(X_{t}\) is lumpable with respect to the permutation \(\rho_1=(1)(234567)\) and the partition \(C_{1}=\{\{1\},\{2,3,4,5,6,7\}\}\). The new aggregated Markov process \(X'_{t}\) preserves the Markov property.

Figure 15 depicts the aggregated cell complex corresponding to the new transition matrix \(A'\) formed by aggregating the symmetric states of the transition matrix \(A\).

Figure 15. Hexagonal tiling in a small complex formed as the result of aggregating symmetric states.

Now we examine what the transition matrix \(A'\) looks like as the result of the permutation \(\rho_1=(1)(234567)\), with starting vector \(v'=(1,0)\). The vector \(v'\) is the result of aggregating the initial vector \(v=(1,0,0,0,0,0,0)\), and \(\rho_1\) is the permutation of the state symmetries of the original Markov chain.

Definition 11. A chain \(X'_t\) is said to be a trivial aggregated chain if \(X'_t\) is the aggregated chain for the Markov chain \(X_t\) and initial distribution \(\pi\), where \(\pi\) is a vector of finite probabilities [14]

To get the transition matrix \(A'\), we apply the idea of lumpability to the transition matrix \(A\) defined in Equation (10). Since states \(2\), \(3\), \(4\), \(5\), \(6\) and \(7\) are symmetrically equivalent, we combine them by aggregation of symmetric states. The new transition matrix \(A'\) is defined as follows:
\begin{gather} A'= \begin{bmatrix} a'_{11} & a'_{12} & \\ a'_{21}& a'_{22} & \\ \end{bmatrix} \label{98} \end{gather}
(55)
To obtain the \(a'_{ij}\) we aggregate the Markov chain over states with the same stochastic behaviour with respect to the process, that is, states whose transition probabilities agree up to a permutation (reordering) of the states. With the help of the \(P_{ij}\) from Equation (10), the \(a'_{ij}\) of Equation (55) are defined as follows:
\begin{equation} \left\{ \begin{array}{c} a'_{11}=P_{11}\\ a'_{12}=\sum_{j=2}^{7}P_{1j}\\ a'_{21}=P_{21}\\ a'_{22}=\sum_{j=2}^{7}P_{2j}\\ \end{array} \right. \label{99} \end{equation}
(56)
Now using the elements \(P_{ij}\) of the transition matrix \(A\) from Equation (10), we have
\begin{gather} A'= \begin{bmatrix} 1-6p & 6p & \\ p& 1-p & \\ \end{bmatrix} \label{100} \end{gather}
(57)
Using this result, we are going to solve for the equilibrium status of the cell complex under the new transition matrix \(A'\). Since the transition matrix has only two states, the new normalization equation is given by:
\begin{equation} \pi'_{1}+\pi'_{2}=1 \label{101} \end{equation}
(58)
Substituting the transition matrix \(A'\) given in Equation (57) into Equation (4), we have
\begin{equation} \left\{ \begin{array}{c} \pi'=\pi' A'\\ (\pi'_{1}, \pi'_{2})=(\pi'_{1}, \pi'_{2})\,A'\\ \end{array} \right. \label{102} \end{equation}
(59)
Substituting Equation (57) into Equation (59) leads to:
\begin{equation} \left\{ \begin{array}{c} (1-6p)\,\pi'_{1} + (p)\,\pi'_{2} =\pi'_{1} \\ (6p)\,\pi'_{1} + (1-p)\,\pi'_{2} =\pi'_{2} \\ \end{array} \right. \label{103} \end{equation}
(60)
For \(p\ne0\), we append Equation (58) to Equation (60) and obtain the solution of the equilibrium status as:
\begin{equation} (\pi'_{1},\pi'_{2})=(1/7,6/7). \label{104} \end{equation}
(61)
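This equilibrium can be double-checked numerically from the lumped matrix \(A'\) of Equation (57). A minimal Python sketch (with an assumed value \(p=0.1\); the stationary distribution itself does not depend on \(p\)):

    import numpy as np

    p = 0.1  # assumed probability parameter; any 0 < p <= 1/6 keeps the rows valid
    A_prime = np.array([[1 - 6*p, 6*p],
                        [p,       1 - p]])

    # Stationary distribution: left eigenvector of A' for eigenvalue 1, normalised to sum to 1.
    eigvals, eigvecs = np.linalg.eig(A_prime.T)
    pi = np.real(eigvecs[:, np.argmin(np.abs(eigvals - 1))])
    pi = pi / pi.sum()
    print(pi)   # approximately [1/7, 6/7]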
3.10.1. Calculation of the expected passage time and recurrence time under symmetries in Hexagonal tiling
Using the aggregated Markov process, we will examine what the transition probabilities look like as a result of the aggregation of state symmetries. Let \(T_{B'C'}\) be the time of hitting state \(C'\) while starting from state \(B'\), let \(P(T_{B'C'}=n)\) be the probability that the molecule hits cell \(C'\) from cell \(B'\) in \(n\) steps, and let \(P_{B'C'}\) be the corresponding entry of the transition matrix \(A'\) shown in Equation (57). We will derive \(P(T_{B'C'}=n)=f^{n}_{B'C'}\) for different values of \(n\). Now using the idea of Equation (7), we have
\begin{equation} f^{n}_{B'C'}=P(T_{B'C'}=n)=\sum_{k=1,k\ne C'}^{m}P_{B'k}f^{n-1}_{kC'}=\sum_{k=1,k\ne C'}^{m}P_{B'k}P(T_{kC'}=n-1). \label{108} \end{equation}
(62)
Rearrangement and simplification of Equation (62) lead to:
\begin{equation} P(T_{B'C'}=n)= P_{B'B'}\,P(T_{B'C'}=n-1). \label{110} \end{equation}
(63)
At the first step \(n=1\), the particle has to move directly from state \(B'\) to state \(C'\). Using this fact, Equation (63) reduces to:
\begin{equation} P(T_{B'C'}=1)(p)=P_{B'C'}=p. \label{111} \end{equation}
(64)
At \(n=2\), with \(P_{ij}\) from Equation (57), Equation (63) lead to:
\begin{equation} P(T_{B'C'}=2)(p)= p(1-p). \label{114} \end{equation}
(65)
Now, for general \(n\), the results of Equation (64) and Equation (65) give the probability that the molecule first reaches state \(C'\) from state \(B'\) in \(n\) steps as:
\begin{equation} P(T_{B'C'}=n)(p)= p(1-p)^{n-1}. \label{117} \end{equation}
(66)
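A short numerical check of Equation (66): starting from the base case of Equation (64), iterate the recursion of Equation (63) and compare with the closed form \(p(1-p)^{n-1}\). The value of \(p\) below is an assumption made only for the check.

    import numpy as np

    p = 0.1
    P_BB = 1 - p   # probability of staying in the aggregated border state B'
    f = p          # f^1 = P(T_{B'C'} = 1) = p, Equation (64)
    for n in range(1, 11):
        closed_form = p * (1 - p)**(n - 1)   # Equation (66)
        assert np.isclose(f, closed_form)
        f = P_BB * f                         # recursion of Equation (63)
    print("recursion agrees with p(1-p)^(n-1) for n = 1..10")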
3.10.2. Calculation of expected passage time from state B' to C'
To compute the expected passage time from state \(B'\) to \(C'\) in the cell complex defined in Figure 15, we use the following formula:
\begin{equation} T_{B'C'}=\sum_{n=1}^{\infty}nP(T_{B'C'}=n). \label{118} \end{equation}
(67)
Now using Equation (66) into Equation (67) gives:
\begin{equation} T_{B'C'}=\sum_{n=1}^{\infty}np(1-p)^{n-1}. \label{119} \end{equation}
(68)
Algebraic manipulation of Equation (68) gives:
\begin{equation} T_{B'C'}(p)=1/p. \label{126} \end{equation}
(69)
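The geometric-series manipulation behind Equation (69) can also be checked numerically with a truncated version of the sum in Equation (68); again the value of \(p\) is an assumed example.

    import numpy as np

    p = 0.1
    n = np.arange(1, 5000)
    T = np.sum(n * p * (1 - p)**(n - 1))   # truncated version of Equation (68)
    print(T, 1/p)                          # both approximately 10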
3.10.3. State symmetry under square small cell complex
For the square tiling we have three non-equivalent cell classes \(A'\), \(B'\) and \(C'\), namely the corner cells, the side cells and the central cell. Using this idea we generate the group of symmetries with the aid of Theorem 8. Using the transition matrix \(P\) in Equation (28), we can generate the group of symmetries of the square, that is, the dihedral group \(D_4\) [1]. By using Figure 8 and the transition matrix \(P\) in Equation (28) with the initial vector \(P^{0}_{1}=(1,0,0,0,0,0,0,0,0)\), we can generate a group \(G\) of state symmetries of \(P\) generated by the permutation \(\rho _1=(1)(2468)(3579)\). So the Markov chain \(X_{t}\) is lumpable with the partition \(C=\{\{1\},\{2,4,6,8\},\{3,5,7,9\}\}\) and the initial vector \(v=(1,0,0)\).
The Figure 16 below shows the structure of the tiling formed after aggregation.

Figure 16. Square tiling in small complex with starting point at the centre cell after aggregation

The new transition matrix \(P'\) is defined as follows:
\begin{gather} P'= \begin{bmatrix} p'_{11} & p'_{12} & p'_{13}& \\ p'_{21}& p'_{22} & p'_{23}\\ p'_{31}& p'_{32} & p'_{33} \end{bmatrix} \label{127} \end{gather}
(70)
The \(p'_{ij}\) in Equation (70) are defined with the help of the \(P_{ij}\) from Equation (28) as follows:
\begin{equation} \left\{ \begin{array}{c} p'_{11}=P_{11}\\ p'_{12}=P_{12}+P_{14}+P_{16}+P_{18}\\ p'_{13}=P_{13}+P_{15}+P_{17}+P_{19}\\ p'_{21}=P_{21}\\ p'_{22}=P_{22}+P_{24}+P_{26}+P_{28}\\ p'_{23}=P_{23}+P_{25}+P_{27}+P_{29}\\ p'_{31}=P_{31}\\ p'_{32}=P_{32}+P_{34}+P_{36}+P_{38}\\ p'_{33}=P_{33}+P_{35}+P_{37}+P_{39}\\ \end{array} \right. \label{128} \end{equation}
(71)
Using this result, the transition matrix \(P'\) of the aggregated process \(X'_{t}\) is given by:
\begin{gather} P'= \begin{bmatrix} 1-4p & 4p & 0\\ p & 1-3p & 2p\\ 0 & 2p & 1-2p\\ \end{bmatrix} \label{129} \end{gather}
(72)
The transition matrix \(P'\) in Equation (72) is of size \(3\times 3\), which makes numerical and algebraic computations less tedious than with the original transition matrix of size \(9\times 9\). Using this result, we are going to solve for the equilibrium status of the cell complex under the new transition matrix \(P'\). From Equation (4), we have that \(\pi=\pi P\). Substituting the transition matrix \(P'\) of Equation (72) into Equation (4), we have the following system of equations:
\begin{equation} \left\{ \begin{array}{c} \pi P'=\pi\\ (\pi_{1}, \pi_{2}, \pi_{3})P'=(\pi_{1}, \pi_{2}, \pi_{3}) \end{array} \right. \label{130} \end{equation}
(73)
Substituting Equation (72) into Equation (73) gives:
\begin{equation} \left\{ \begin{array}{c} (1-4p)\,\pi_{1} + (p)\,\pi_{2} + (0)\,\pi_{3} =\pi_{1} \\ (4p)\,\pi_{1} + (1-3p)\,\pi_{2}+ (2p)\,\pi_{3} =\pi_{2} \\ (0)\,\pi_{1} + (2p)\,\pi_{2} + (1-2p)\,\pi_{3} =\pi_{3}\\ \end{array} \right. \label{131} \end{equation}
(74)
To solve Equation (74) when \(p\ne0\), we append the normalization equation \(\pi _{1}+\pi _{2}+\pi _{3}=1\) to Equation (74) and obtain the solution:
\begin{equation} (\pi_{1},\pi_{2},\pi_{3})=(1/9,4/9,4/9). \label{135} \end{equation}
(75)
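The solution in Equation (75) can be confirmed from the aggregated matrix \(P'\) of Equation (72) by power iteration; a minimal sketch with an assumed value of \(p\):

    import numpy as np

    p = 0.1
    P_prime = np.array([[1 - 4*p, 4*p,     0      ],
                        [p,       1 - 3*p, 2*p    ],
                        [0,       2*p,     1 - 2*p]])

    # Power iteration: any initial distribution converges to the stationary one.
    pi = np.array([1.0, 0.0, 0.0])
    for _ in range(5000):
        pi = pi @ P_prime
    print(pi)   # approximately [1/9, 4/9, 4/9]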
3.10.4. Calculation of the expected passage time and recurrence time under symmetries in square tiling
Using the transition matrix \(P'\) generated after the aggregation of the Markov chain, we are going to see how the hitting time probabilities and the expected passage time behave. Let \(A'\), \(B'\) and \(C'\) denote the states of the new transition matrix formed after aggregation, where \(A'\) represents the corner cells, \(B'\) denotes the side cells and \(C'\) represents the central cell. Let \(P(T_{jC'}=n)\) for \(j\in\{A',B'\}\) denote the probability that the molecule moves from state \(j\) to state \(C'\) in \(n\) steps.
Using the idea of Equation (7), we have
\begin{equation} P(T_{jC'}=n)=\sum_{k=1,k\ne C'}^{m}P_{jk}P(T_{kC'}=n-1). \label{136} \end{equation}
(76)
Using \(j\in\{A',B'\}\) in Equation (76), with \(C'=1\), \(B'=2\) and \(A'=3\), leads to:
\begin{equation} \left\{ \begin{array}{c} P(T_{A'C'}=n)=P_{A'B'}\,P(T_{B'C'}=n-1)+P_{A'A'}\,P(T_{A'C'}=n-1)\\ P(T_{B'C'}=n)=P_{B'B'}\,P(T_{B'C'}=n-1)+P_{B'A'}\,P(T_{A'C'}=n-1)\\ \end{array} \right. \label{139} \end{equation}
(77)
At \(n=1\), the particle can only move from cell \(B'\) to \(C'\) but it cannot move from cell \(A'\) to \(C'\). Using this fact we have:
\begin{equation} \left\{ \begin{array}{c} P(T_{A'C'}=1)(p)=0\\ P(T_{B'C'}=1)(p)=p \end{array} \right. \label{140} \end{equation}
(78)
Iterating Equation (77) up to \(n=3\), with the entries of \(P'\) from Equation (72), gives:
\begin{equation} \left\{ \begin{array}{c} P(T_{A'C'}=3)(p)=2p^{2}(2-5p)\\ P(T_{B'C'}=3)(p)=p(1-6p+13p^{2})\\ \end{array} \right. \label{146} \end{equation}
(79)
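The polynomials in Equations (78) and (79) can be reproduced by iterating the recursion of Equation (77) with the entries of \(P'\) from Equation (72); the sketch below assumes a numerical value of \(p\) and checks the \(n=3\) expressions.

    import numpy as np

    p = 0.1
    # States of the aggregated square chain: C' = centre, B' = sides, A' = corners.
    P_BB, P_BA, P_BC = 1 - 3*p, 2*p, p
    P_AB, P_AA = 2*p, 1 - 2*p

    fB, fA = p, 0.0       # n = 1, Equation (78)
    for n in range(2, 4):
        fB, fA = P_BB*fB + P_BA*fA, P_AB*fB + P_AA*fA   # recursion of Equation (77)

    # n = 3 values should match Equation (79)
    print(fA, 2*p**2*(2 - 5*p))
    print(fB, p*(1 - 6*p + 13*p**2))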
An advantage of performing aggregation of the Markov chain is that it makes numerical and algebraic computations easier by reducing the size of the transition matrix.

4. Results from numerical simulation

In this research, numerical simulation has been carried out to verify some of the algebraic results and to deduce results which are not easy to obtain analytically in both small cell complexes. Table 1 shows how different values of the probability parameter \(p\) affect the expected passage time from one state to another under the hexagonal small cell complex.
Table 1. A table to represent the effect of changing the value of parameter \(p\) on expected passage time from one state to another under hexagonal small cell complex.
start state | end state | p | expected passage time
2 | 1 | 0.1667 | 6.01
1 | 2 | 0.1667 | 12.95
2 | 5 | 0.1667 | 16.65
5 | 2 | 0.1667 | 16.54
2 | 1 | 0.1 | 10.04
1 | 2 | 0.1 | 21.46
2 | 5 | 0.1 | 28.17
5 | 2 | 0.1 | 27.97
2 | 1 | 0.001 | 100.86
1 | 2 | 0.001 | 213.04
2 | 5 | 0.001 | 283.43
5 | 2 | 0.001 | 279.47
Table 2 shows how different values of the probability parameter \(p\) affect the expected passage time from one state to another under the square small cell complex.
Table 2. A table to represent the effect of changing the value of parameter \(p\) on expected passage time from one state to another under square small cell complex.
start state | end state | p | expected passage time
2 | 1 | 0.25 | 8.27
1 | 2 | 0.25 | 13.00
3 | 1 | 0.25 | 10.01
1 | 3 | 0.25 | 21.13
2 | 1 | 0.1 | 19.84
1 | 2 | 0.1 | 32.52
3 | 1 | 0.1 | 24.88
1 | 3 | 0.1 | 54.38
2 | 1 | 0.001 | 199.72
1 | 2 | 0.001 | 323.69
3 | 1 | 0.001 | 250.52
1 | 3 | 0.001 | 536.33
With algebraic computations we observed that different values of the probability parameter \(p\) have no effect on the expected recurrence time. Table 3 shows the numerically simulated results used to verify the algebraic computations of the expected recurrence time.
Table 3. A table displaying the expected recurrence time under hexagonal and square tiling with different values of \(p\).
start state | end state | p | expected recurrence time (hexagonal) | expected recurrence time (square)
1 | 1 | 0.1667 | 7.09 | 9.03
1 | 1 | 0.25 | 7.01 | 9.05
1 | 1 | 0.1 | 7.03 | 8.85
2 | 2 | 0.1 | 6.89 | 8.66
7 | 7 | 0.1 | 7.01 | 8.54
1 | 1 | 0.001 | 7.39 | 9.16
3 | 3 | 0.001 | 6.7 | 8.83

The results of Tables 1, 2 and 3 are very important for the verification of the algebraic computations. For instance, from Equation (23) we have that the expected passage time from a state \(j>1\) to state \(1\) is given by \(T_{j1}=1/p\). We use the results of Table 1 to verify this result as follows:
From Table 1, with \(p=0.1\) the expected passage time from state \(2\) to state \(1\) is \(10.04\). If we substitute \(p=0.1\) into Equation (23), we obtain \(T_{j1}=10\), which is very close to the numerical result; this indicates that the algebraic computations of the expected passage time under hexagonal tiling are correct.

We can verify the algebraic result of Equation (47) from the numerical results presented in Table 2. From Table 2, the expected passage time from state \(2\) to state \(1\) with \(p=0.25\) is \(8.27\), and from Equation (47) we have \(T_{21}=2/p\). With \(p=0.25\), \(T_{21}=8\), which is almost the same as the numerical result. One can also verify the numerical results for the recurrence time using Table 3.
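As a further check on Table 1, the aggregated hexagonal chain of Equation (57) can be simulated directly; the following minimal Monte Carlo sketch (not the simulation code used to produce the tables) estimates the expected passage time from \(B'\) to \(C'\).

    import numpy as np

    rng = np.random.default_rng(0)
    p, trials = 0.1, 20000
    steps = []
    for _ in range(trials):
        n = 1
        # From B', each step reaches C' with probability p (Equation (57)).
        while rng.random() >= p:
            n += 1
        steps.append(n)
    print(sum(steps) / trials)   # close to 1/p = 10 (cf. Table 1: 10.04 at p = 0.1)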

4.1. Comparison of both tilings

Under the small cell complexes, with the aid of the plots of probability against time, we observe that the movement of a molecule in hexagonal tiling is faster than in square tiling. This is because the attainment of the stationary distribution in hexagonal tiling takes fewer steps than the number of steps the molecule needs to reach equilibrium in square tiling. For example, in Figure 4(f), which depicts the movement in hexagonal tiling with probability parameter \(p=\frac{1}{20}\), the number of steps to attain equilibrium is less than \(15\), while in Figure 14(d), which shows the movement in square tiling with the same probability parameter \(p=\frac{1}{20}\), the number of steps that the process takes to attain the equilibrium status is more than \(20\).

Using the idea of expected passage time and recurrence time, we observe that in hexagonal tiling the expected passage time from one state to another is smaller than the one needed in square tiling, and the expected time of the first return to the same state in hexagonal tiling is slightly smaller than the one in square tiling. For example, the expected passage time in Equation (23) is smaller than the one in Equation (47), and likewise the recurrence time in Equation (27) is smaller than the one in Equation (50).

Generally the movement of the particle under hexagonal tiling appears to be faster than under square tiling because at step one the number of possible cells the particle can move to is six under hexagonal tiling compared to four under square tiling. This implies that the movement of the particle under hexagonal tiling covers more cells within fewer steps than under square tiling.

5. Discussion of the investigations

In this section we discuss various results obtained both numerically and analytically under the hexagonal and square small cell complexes.

5.1. Impact of changing the value of probability parameter \(p\) toward attainment of stationary distribution.

We discovered that changing the value of the probability parameter \(p\) has a direct impact on the movement of the molecule over the cell complex and on the attainment of the equilibrium status. In this essay we observed that when the probability parameter \(p\) is large, the process is likely to attain equilibrium faster than when \(p\) is small. If we refer to Figures 4(a) and 4(f), both of which display plots of probability against time under hexagonal tiling, we observe that the time taken for the process to reach the equilibrium status in 4(a) is smaller than the time in Figure 4(f). This is due to the variation of the value of the probability parameter used in the two cases. The situation is the same for square tiling.

5.2. Impact of the parameter \(p\) in numerical simulation

With the aid of numerical simulation, we see that the value of \(p\) has a direct impact on the number of transitions needed for the molecule to move from one state to another. When the probability parameter \(p\) is small, the number of moves to reach the next state is large, unlike when \(p\) is large.

5.3. Changing the value of parameter \(p\) in numerical simulation under hexagonal tiling

Table 1 shows how the value of the probability parameter \(p\) affects the movement of the particle from different states to a neighbouring state. From Table 1 we observe that when the probability parameter \(p\) is small, the expected number of transitions from one state to another is high compared to the case of a large value of \(p\). This is because when \(p\) is small the movement of the molecule is slow, unlike when \(p\) is large. Directly from Table 1 we also observe that the expected number of transitions from state \(2\) to state \(1\) is not equal to the expected number of transitions from state \(1\) to state \(2\); this might be because of the different paths a molecule uses while in movement.

5.4. Changing the value of parameter \(p\) in numerical simulation under square tiling

Under the simulation of hitting times in the square small cell complex, the value of the probability parameter \(p\) has a direct impact on the number of transitions from one state to another. As in the hexagonal small cell complex, different values of the probability \(p\) alter the speed of the molecule while in motion. In this case, the smaller the value of the probability parameter \(p\), the larger the expected number of transitions, and vice versa.

Table 2 shows how different values of the probability parameter \(p\) affect the expected passage time from one state to another. From Table 2 we observe that as the probability parameter \(p\) decreases, the expected number of transitions for the particle to reach a given state from the initial state increases. From this we deduce that the probability parameter \(p\) is inversely proportional to the number of transitions.

5.5. Numerical simulation of expected recurrence time with different values of the probability parameter \(p\)

With algebraic computations we observed that the expected number of transitions for the first return to the same state, that is, the expected recurrence time to state \(1\), is \(7\) for the hexagonal and \(9\) for the square small cell complex, as defined in Equation (27) and Equation (50) respectively.

Table 3 reports the expected recurrence time for different values of the parameter \(p\) in both the hexagonal and square small cell complexes; the values show no visible difference from those defined in Equation (27) and Equation (50).

5.6. Expected passage time from state \(i\) to \(j\) and from state \(j\) to \(i\) with relation to symmetries

Based on the results from numerical simulation, we observed that when two states \(i\) and \(j\) are symmetric, that is, they have the same stochastic behaviour with respect to the process, then the expected passage time from state \(i\) to state \(j\) is the same as the expected passage time from state \(j\) to state \(i\). But if states \(i\) and \(j\) are non-equivalent, then the expected passage time from state \(i\) to \(j\) is not equal to the expected passage time from state \(j\) to state \(i\). The reason behind the difference is that when two states \(i\) and \(j\) have the same stochastic behaviour with respect to the process, their paths are likely to be similar, but if \(i\) and \(j\) are non-equivalent states, their paths are not the same.

5.7. Comparison of results from numerical simulation with results from algebraic computations.

This essay comprises algebraic computation and numerical simulation. The purpose of the numerical simulation is to verify the algebraic computation. With numerical simulation we generated results that are useful for testing whether the algebraic results are correct. For example, the expected passage time under hexagonal tiling as defined in Equation (23) can be verified by comparing it with the results from numerical simulation given in Table 1. The results in Table 1 for different values of the probability parameter \(p\) approximate the expected passage time defined in Equation (23). One can verify the square tiling case in the same way.

5.8. Effect of changing the initial position of the molecule under small cell Complex.

Changing the starting point of the movement of the particle under the small cell complex has a direct impact on the movement of the molecule. One visible effect is on the attainment of the stationary distribution. By considering Figures 4 and 5, in which the starting point is the central cell and one of the border cells respectively, we observed that when the particle is initially at the central cell the stationary distribution is reached faster, as seen in Figure 4, than when the molecule starts at one of the border cells, as portrayed in Figure 5. This is because when the molecule is initially at the central cell, that is cell \(1\) as in Figure 4, the probability of moving from state \(1\) to any of the other states is the same at step one, while when the particle is initially at one of the border cells it is not possible to reach every other state at step one, so the probabilities are not the same. In other words, when the particle is initially at the central cell there are more cells that can be visited at step one than when the molecule is initially at one of the border cells. In this case the value of the probability parameter \(p\) is assumed to be constant. The same effect is observed under square tiling.

5.9. Impact of the number of cells in the cell complex on the expected return of the molecule to the same state.

In this study we observed that the number of cells within the respective cell complex has a direct impact on the expected recurrence time of the molecule. Based on the results of this essay, we realized that the expected recurrence time is equal to the number of states, or cells, in the cell complex. To verify this we refer to the algebraic results presented in Equation (27) and Equation (50) for hexagonal and square tiling respectively. Under hexagonal tiling we have \(7\) states and the expected recurrence time is \(7\), as in Equation (27), while in square tiling we have \(9\) states and the recurrence time is \(9\), as in Equation (50).

5.10. Effect of symmetries on the cell complex.

When the cell structure is large, algebraic calculations become tedious and cumbersome. In this essay we observed that aggregation of state symmetries is a very useful technique for handling the algebraic computations of a Markov chain \(X_{t}\) with a large state space \(S\). Aggregation of the state symmetries of the Markov chain \(X_{t}\) reduced the difficulty of, and time needed for, the algebraic computations of various results. Among the impacts of aggregating state symmetries in this essay are those observed in Equation (72) and Equation (57), which are the new transition matrices \(P'\) and \(A'\) of the original transition matrices \(P\) and \(A\) of the square and hexagonal tilings, defined in Equation (28) and Equation (10) respectively. Before the aggregation of the state symmetries of the transition matrices \(P\) and \(A\) defined in Equation (28) and Equation (10), it was very difficult to derive the equilibrium status of the Markov process as defined in Equation (29) and Equation (14) respectively. But after obtaining the new transition matrices \(P'\) and \(A'\) as defined in Equation (72) and Equation (57), it became easier to arrive at the equilibrium status as defined in Equation (75) and Equation (61) respectively.

6. Conclusion

At the end of this study we observed that the value of the probability parameter \(p\) has a big influence on the speed of the molecule in both tilings. We observed that when the value of the probability parameter \(p\) is small, the speed of the particle is very slow, and vice versa. Thus the attainment of the stationary distribution is affected by the value of the probability parameter \(p\): the larger the value of the probability parameter, the faster the equilibrium is achieved, and vice versa. Another thing we noticed is that the value of the probability parameter has no effect on the expected recurrence time. This means that regardless of the value of the probability parameter \(p\), the expected recurrence time always remains the same. We also observed that the position of the molecule at the initial step has a big influence on the attainment of the stationary distribution. In this study we noticed that when the molecule is initially at the central cell, the attainment of equilibrium is faster than when the particle is initially at one of the border cells. This is because the number of possible cells that can be visited at the first step when the particle is initially at the central cell is larger than when the particle is initially at one of the border cells. We also realized that the process is likely to attain equilibrium faster in hexagonal tiling than in square tiling, due to the difference in the number of states and in the number of possible cells to be visited when the particle is initially at the central cell.

This work was very interdisciplinary because many disciplines of mathematics were employed for its completion. Among the disciplines of mathematics employed in this essay are group theory, probability theory, differentiation of functions and linear algebra. Group theory was used to understand the concepts of state symmetries and permutations. Linear algebra helped with the whole concept of matrices, vectors and solving systems of linear equations. Differentiation helped in summing some geometric series, while probability theory was useful in understanding the stationary distribution.

Acknowledgments

The authors would like to express their thanks to the referee for his useful remarks.

Author Contributions

All authors contributed equally to the writing of this paper. All authors read and approved the final manuscript.

Competing Interests

The author(s) do not have any competing interests in the manuscript.

References

  1. Ring, A. (2004). State symmetries in matrices and vectors on finite state spaces. arXiv preprint math/0409264. [Google Scholor]
  2. Beneš, V. E. (1978). Reduction of network states under symmetries. Bell System Technical Journal, 57(1), 111-149. [Google Scholor]
  3. Barr, D. R., & Thomas, M. U. (1977). An eigenvector condition for Markov chain lumpability. Operations Research, 25(6), 1028-1031. [Google Scholor]
  4. Karmanov, A. V., & Karmanova, L. A. (2005). Estimation of finite probabilities via aggregation of Markov chains. Automation and Remote Control, 66(10), 1640-1646. [Google Scholor]
  5. Schweitzer, P. J. (1983, September). Aggregation methods for large Markov chains. In Proceedings of the International Workshop on Computer Performance and Reliability (pp. 275-286). North-Holland Publishing Co. [Google Scholor]
  6. Sumita, U., & Rieders, M. (1989). Lumpability and time reversibility in the aggregation-disaggregation method for large Markov chains. Stochastic Models, 5(1), 63-81.[Google Scholor]
  7. Ching, W. K., Huang, X., Ng, M. K., & Siu, T. K. (2013). Higher-order markov chains. In Markov Chains (pp. 141-176). Springer, Boston, MA. [Google Scholor]
  8. Snodgrass, S., & Ontañón, S. (2014). Experiments in map generation using Markov chains. In FDG.[Google Scholor]
  9. Snodgrass, S., & Ontañón, S. (2014, September). A hierarchical approach to generating maps using markov chains. In Tenth Artificial Intelligence and Interactive Digital Entertainment Conference. [Google Scholor]
  10. Kayibi, K., Samee, U., Merajuddin, M., & Pirzada, S. (2019). Generalized Dominoes Tiling'S Markov Chain Mixes Fast. Journal of applied mathematics & informatics, 37(5-6), 469-480. [Google Scholor]
  11. Kayibi, K. K., & Pirzada, S. (2018). T-tetrominoes tiling's Markov chain mixes fast. Theoretical Computer Science, 714, 1-14.[Google Scholor]
  12. Kayibi, K. K., & Pirzada, S. (2012). Planarity, symmetry and counting tilings. Graphs and Combinatorics, 28(4), 483-497. [Google Scholor]
  13. Dyer, M., Kannan, R., & Mount, J. (1997). Sampling contingency tables. Random Structures & Algorithms, 10(4), 487-506. [Google Scholor]
  14. Harris, T. E. (1952). First passage and recurrence distributions. Transactions of the American Mathematical Society, 73(3), 471-486.[Google Scholor]
]]>
Optimal control analysis of combined anti-angiogenic and tumor immunotherapy https://old.pisrt.org/psr-press/journals/oms-vol-3-2019/optimal-control-analysis-of-combined-anti-angiogenic-and-tumor-immunotherapy/ Sat, 30 Nov 2019 10:14:07 +0000 https://old.pisrt.org/?p=3499
OMS-Vol. 3 (2019), Issue 1, pp. 349 – 357 Open Access Full-Text PDF
Anuraag Bukkuri
Abstract: The author considers a mathematical model of immunotherapy and anti-angiogenesis inhibitor therapy for cancer patients over a fixed time horizon. Disease dynamics are captured by a system of ODEs developed in [1], describing dynamics among host cells, cancer cells, endothelial cells, effector cells, and anti-angiogenesis. Existence, uniqueness, and characterization of optimal treatment profiles that minimize the tumor and drug usage, while maintaining healthy levels of effector and host cells are determined. A theoretical analysis is performed to characterize the optimal control. Numerical simulations are performed to illustrate optimal control profiles for a variety of different patients, each leading to different treatment protocols.
]]>

Open Journal of Mathematical Sciences

Optimal control analysis of combined anti-angiogenic and tumor immunotherapy

Anuraag Bukkuri\(^1\)
Department of Mathematics, University of Minnesota, Minneapolis, MN 55455, USA.

\(^{1}\)Corresponding Author: bukku001@umn.edu

Abstract

The author considers a mathematical model of immunotherapy and anti-angiogenesis inhibitor therapy for cancer patients over a fixed time horizon. Disease dynamics are captured by a system of ODEs developed in [1], describing dynamics among host cells, cancer cells, endothelial cells, effector cells, and anti-angiogenesis. Existence, uniqueness, and characterization of optimal treatment profiles that minimize the tumor and drug usage, while maintaining healthy levels of effector and host cells are determined. A theoretical analysis is performed to characterize the optimal control. Numerical simulations are performed to illustrate optimal control profiles for a variety of different patients, each leading to different treatment protocols.

Keywords:

Optimal control, immunotherapy, anti-angiogenesis, combination therapy.

1. Introduction

Angiogenesis is an intricate process in the human body in which endothelial cells proliferate, migrate and remodel themselves from pre-existing blood vessels, formed during early vasculogenesis. It is involved in a variety of healthy physiological functions such as embryonic development, wound healing, and collateral formation for improved organ perfusion [2]. However, it is also one of the hallmarks of metastatic cancer: the process of angiogenesis provides the tumor with the oxygen and nutrients needed to grow and metastasize to other parts of the body. In other words, the formation of these new blood vessels serves as the principal route for the tumor cells to exit the primary tumor site and enter circulation [3]. Figure 1 from [2] gives a concise picture of the process of tumor angiogenesis:

Figure 1. Depiction of the Tumor Angiogenesis Process

Recently, much biomedical research has gone into creating effective anti-angiogenic inhibitors. As a result, a plethora of anti-angiogenic agents such as Avastin, Nexavar, and Zaltrap have been developed. The effectiveness of these agents has been found to be mediocre at best, and it has been shown that pharmacologic anti-angiogenic protocols which arrest tumor progression are often not enough to eradicate tumors [2]. As such, physicians are now starting to use anti-angiogenic inhibitors in conjunction with other therapies such as immunotherapy or chemotherapy [4]. A recent mathematical modeling paper showed the effectiveness of anti-angiogenic drugs when used in combination with immunotherapy, displaying through a bifurcation analysis how the former supplements the latter, leading to earlier tumor remission [1]. However, the paper does not analyze when this combination treatment should be used in the clinic, i.e., when used with immunotherapy, when does the anti-angiogenic treatment cause more harm than benefit? Through optimal control analysis, this is the question we attempt to answer here. The paper is organized as follows: first, the mathematical model proposed in [1] is summarized along with its nondimensionalization. Then, an analytic optimal control analysis is performed, giving optimality conditions for the administration of immunotherapy and anti-angiogenic drugs. Then, parameter values determined in [1] are provided and used to perform numerical simulations for a control patient, and for patients with different side effects for each treatment. Finally, a brief conclusion is given.

2. Model Description

The author analyzes the model originally constructed in [1]. This model considers dynamics among host cells \((x)\), cancer cells \((y)\), endothelial cells \((z)\), effector cells \((v)\), and anti-angiogenesis \((w)\). The model is reproduced below: \begin{eqnarray*}&&\frac{dX}{dt}= \alpha_1X\Big(1-\frac{X}{K_1}\Big)-Q_1XY,\\ &&\frac{dY}{dt}= \alpha_2Y\Big(1-\frac{Y}{K_2+BZ}\Big)-Q_2XY-Q_3YV+P_2YZ,\\ &&\frac{dZ}{dt}= CY+\alpha_3Z\Big(1-\frac{Z}{K_3}\Big)-\frac{P_3ZW}{A_3+Z},\\ &&\frac{dV}{dt} = S_1+RY-D_4V,\\ &&\frac{dW}{dt} = S_2-\frac{P_5ZW}{A_3+Z}-D_5W.\end{eqnarray*}

In this model, the \(\alpha_i\) represent the natural growth rates of the populations, the \(K_i\) represent the carrying capacities, and the \(Q_i\) represent intracellular competition. \(B\) represents the portion of the endothelial cell population responsible for tumor angiogenesis, \(P_2\) is the promotion coefficient of endothelial cells to cancer cells, \(C\) is the rate of production of cancer cells due to endothelial cells, \(R\) represents the rate of recruitment of effector cells due to cancer cells, \(D_{4,5}\) are the washout rates of effector cells and the anti-angiogenic agent, and the terms \(\frac{P_iZW}{A_3+Z}\) are Holling type II functional responses, capturing the saturating effects of the anti-angiogenic response.

To further clarify the dependence of the system on parameters, and to improve the performance of numerical methods, the system is non-dimensionalized as follows. The nondimensionalized state variables are \(x=\frac{X}{K_1}\), \(y=\frac{Y}{K_2}\), \(z=\frac{Z}{K_3}\), \(v=V\), \(w=W\) and the corresponding parameters are \(q_1=Q_1K_2\), \(\gamma=\frac{BK_3}{K_2}\), \(q_2=Q_2K_1\), \(q_3=Q_3\), \(p_2=P_2K_3\), \(\beta=\frac{CK_2}{K_3}\), \(p_3=\frac{P_3}{K_3}\), \(a_3=\frac{A_3}{K_3}\), \(d_i=D_i\), \(r=K_2R\), \(p_5=P_5\). Then, the nondimensionalized system is given by: \begin{eqnarray*} \frac{dx}{dt} &=& \alpha_1x(1-x)-q_1xy,\\ \frac{dy}{dt} &=& \alpha_2y\Big(1-\frac{y}{1+\gamma z}\Big)-q_2xy-q_3yv+p_2yz,\\ \frac{dz}{dt} &=& \beta y+\alpha_3z(1-z)-\frac{p_3zw}{a_3+z},\\ \frac{dv}{dt} &=& S_1+ry-d_4v,\\ \frac{dw}{dt} &=& S_2-\frac{p_5zw}{a_3+z}-d_5w.\end{eqnarray*}

3. Optimal control analysis

Here, we formulate the problem of constructing the most effective treatment regimen as an optimal control problem. To do this, we first modify the model slightly to incorporate the controls by adding "scaling control" factors to the \(S_1\) and \(S_2\) therapy administration terms: \begin{eqnarray*}\frac{dx}{dt} &=& \alpha_1x(1-x)-q_1xy,\\ \frac{dy}{dt} &=& \alpha_2y\Big(1-\frac{y}{1+\gamma z}\Big)-q_2xy-q_3yv+p_2yz,\end{eqnarray*} \begin{eqnarray*}\frac{dz}{dt} &=& \beta y+\alpha_3z(1-z)-\frac{p_3zw}{a_3+z},\\ \frac{dv}{dt} &=& u_{imm}(t)S_1+ry-d_4v,\\ \frac{dw}{dt} &=& v_{ang}(t)S_2-\frac{p_5zw}{a_3+z}-d_5w.\end{eqnarray*}
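For reference, the controlled nondimensionalized dynamics can be collected into a single right-hand-side function. The sketch below is only an illustration of how the system would be integrated for fixed control profiles; the dictionary `par` of parameter values (e.g., taken from Table 1) and the use of SciPy are assumptions, not part of the original analysis.

    from scipy.integrate import solve_ivp

    def rhs(t, X, par, u_imm, v_ang):
        # Controlled, nondimensionalized dynamics; u_imm and v_ang are functions of t.
        x, y, z, v, w = X
        dx = par['alpha1']*x*(1 - x) - par['q1']*x*y
        dy = (par['alpha2']*y*(1 - y/(1 + par['gamma']*z))
              - par['q2']*x*y - par['q3']*y*v + par['p2']*y*z)
        dz = par['beta']*y + par['alpha3']*z*(1 - z) - par['p3']*z*w/(par['a3'] + z)
        dv = u_imm(t)*par['S1'] + par['r']*y - par['d4']*v
        dw = v_ang(t)*par['S2'] - par['p5']*z*w/(par['a3'] + z) - par['d5']*w
        return [dx, dy, dz, dv, dw]

    # Example usage (par must be filled with the Table 1 values):
    # sol = solve_ivp(rhs, (0, 800), [0.8, 0.0006, 0, 0, 0],
    #                 args=(par, lambda t: 1.0, lambda t: 0.0), method='LSODA')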

We want to maximize the number of host and effector cells while minimizing the number of cancer cells and amount of anti-angiogenic inhibitor and immunotherapy prescribed over a fixed therapy interval \([0,T].\)

We choose as our control class piecewise continuous functions defined for all \(t\) such that \(0 \leq u_{imm}(t) \leq 1\), where \(u_{imm}(t)= 1\) represents maximal immunotherapy and \(u_{imm}(t) = 0\) represents no immunotherapy. Similarly, \(v_{ang}(t) = 0\) represents no anti-angiogenic inhibitor and \(v_{ang}(t) = 1\) represents maximal anti-angiogenic drug. Thus, we depict the class of admissible controls as: \begin{equation*} U(t) = u_{imm}(t), v_{ang}(t)\;\; \text{piecewise continuous such that}\;\; 0 \leq u_{imm}(t), v_{ang}(t) \leq 1, \forall t \in [0, T] \end{equation*}

Now, we define the objective functional and the optimal control problem. For a fixed therapy horizon \([0,T],\) maximize the objective functional:
\begin{equation} J(u,v) = \int_{0}^{T} \alpha v+\theta x-\frac{1}{2}b_1u_{imm}^2-\frac{1}{2}b_2v_{ang}^2-\psi y dt \end{equation}
(1)
over all Lebesgue-measurable functions \(u:[0,T] \rightarrow [0,u_{max}]\) and \(v:[0,T] \rightarrow [0,v_{max}]\) subject to the above ODE dynamics and initial conditions of \(x(0) = 0.8, y(0) = 0.0006, z(0) = v(0) = w(0) = 0.\)

3.1. Existence of optimal control

The existence of an optimal control for the state system is analyzed using the theory developed in [5]. The boundedness of solutions of the system for finite time is needed to obtain existence and uniqueness of an optimal control. Using \(y(t)< K_2\), the carrying capacity of the cancer cell population, upper bounds on the solutions of the state system are determined. \begin{eqnarray*} \frac{d\bar{x}}{dt} &=& \alpha_1\bar{x},\\ \frac{d\bar{y}}{dt} &=& \alpha_2\bar{y}+p_2\bar{z}K_2,\\ \frac{d\bar{z}}{dt} &=& \beta \bar{y}+\alpha_3\bar{z},\\ \frac{d\bar{v}}{dt} &=& S_1+r\bar{y},\\ \frac{d\bar{w}}{dt} &=& S_2.\end{eqnarray*} The supersolutions \(\bar{x},\bar{y},\bar{z},\bar{v},\bar{w}\) of the above system are bounded on a finite time interval. The system can also be written, where \('=\frac{d}{dt}\), as
\begin{equation} \begin{pmatrix} \bar{x} \\ \bar{y} \\ \bar{z} \\ \bar{v} \\ \bar{w} \end{pmatrix}^{\!'} = \begin{pmatrix} \alpha_1 & 0 & 0 & 0 & 0\\ 0 & \alpha_2 & p_2K_2 & 0 & 0\\ 0 & \beta & \alpha_3 & 0 & 0\\ 0 & r & 0 & 0 & 0\\ 0 & 0 & 0 & 0 & 0\\ \end{pmatrix} \begin{pmatrix} \bar{x} \\ \bar{y} \\ \bar{z} \\ \bar{v} \\ \bar{w} \end{pmatrix} + \begin{pmatrix} 0 \\ 0 \\ 0 \\ S_1 \\ S_2 \end{pmatrix} \end{equation}
(2)
Since this is a linear system in finite time with bounded coefficients, the supersolutions \(\bar{x},\bar{y},\bar{z},\bar{v},\bar{w}\) are uniformly bounded. Using that the solution to each state equation is bounded, we now prove the existence of an optimal control.

Theorem 1. There exist optimal controls \(u_{imm}\) and \(v_{ang}\) that maximize the objective functional \(J(u,v)\) if the following conditions are met:

  1. The class of all initial conditions with controls \(u_{imm}\) and \(v_{ang}\) such that \(u_{imm}\) and \(v_{ang}\) are Lebesgue integrable functions on [0,T] with values in the admissible control set along with each state equation being satisfied is not empty
  2. The admissible control set is closed and convex
  3. The right hand side of the state system is continuous, is bounded above by a sum of the bounded control and the state, and can be written as a linear function of \(u_{imm}\) and \(v_{ang}\) with coefficients depending on time and the state variables
  4. The integrand of the functional is concave on the admissible control set and is bounded above by \(c_3 - c_2|u_{imm}|^\theta - c_1|v_{ang}|^\phi\), where \(c_1, c_2 > 0\), and \(\theta, \phi > 1\).

Proof. First, from a result in [6], since our state system has bounded coefficients and any solutions are bounded on the finite time interval, we obtain the existence of the solution of the state system. Second, the admissible control set is closed and convex, by definition. For the third condition, the right hand side of the state system is continuous since each term with a denominator is nonzero. Moreover, the system is bilinear in the controls and can be rewritten as

\begin{equation} \bar{f}(t,\bar{X},u_{imm},v_{ang}) = \bar{\gamma}(t,\bar{X})+S_1u_{imm}+S_2v_{ang} \end{equation}
(3)
where \(\bar{X} = (x,y,z,v,w)\) and \(\bar{\gamma}\) is a vector-valued function of \(\bar{X}\). Since the solutions are bounded, we have
\begin{equation} |\bar{f}(t,\bar{X},u_{imm},v_{ang})|\leq \left| \begin{pmatrix} \alpha_1 & 0 & 0 & 0 & 0\\ 0 & \alpha_2 & p_2K_2 & 0 & 0\\ 0 & \beta & \alpha_3 & 0 & 0\\ 0 & r & 0 & 0 & 0\\ 0 & 0 & 0 & 0 & 0\\ \end{pmatrix} \begin{pmatrix} x \\ y \\ z \\ v \\ w \end{pmatrix}\right| +\left|\begin{pmatrix} 0 \\ 0 \\ 0 \\ S_1u_{imm} \\ S_2v_{ang} \end{pmatrix}\right| \leq C_1|\bar{X}|+S_1|u_{imm}|+S_2|v_{ang}| \end{equation}
(4)
where \(C_1\) depends on the coefficients of the system. Also, note that the integrand of J(u,v) is concave on the admissible control set. The existence of optimal control follows from the fact that \(\alpha v+\theta x-\frac{1}{2}b_1u_{imm}^2-\frac{1}{2}b_2v_{ang}^2-\psi y \leq c_3 - c_2|u_{imm}|^\theta - c_1|v_{ang}|^\phi\), where \(c_1, c_2 > 0\), and \(\theta, \phi > 1\) since \(y(t) \leq K_2\).

3.2. Characterization of optimal control

Now, we characterize the optimal control pair \((u_{imm},v_{ang})\). Conditions for optimality are determined by a version of the Pontryagin maximum principle. The existence of an optimal control pair is ensured by the compactness of the control and state spaces, in addition to the convexity of the problem. First, we define the Lagrangian associated with \(J(u,v)\) and our ODE model as follows:
\(L = \alpha v+\theta x-\frac{1}{2}b_1u_{imm}^2-\frac{1}{2}b_2v_{ang}^2-\psi y + \lambda_1(\alpha_1x(1-x)-q_1xy) + \lambda_2\Big(\alpha_2y\Big(1-\frac{y}{1+\gamma z}\Big)-q_2xy-q_3yv+p_2yz\Big) + \lambda_3\Big(\beta y+\alpha_3z(1-z)-\frac{p_3zw}{a_3+z}\Big) + \lambda_4(u_{imm}(t)S_1+ry-d_4v) + \lambda_5\Big(v_{ang}(t)S_2-\frac{p_5zw}{a_3+z}-d_5w\Big) + j_1(t)(u_{imm})+j_2(t)(1-u_{imm})+k_1(t)v_{ang}+k_2(t)(1-v_{ang}).\) Note that the last \(j\) and \(k\) terms have been added in to serve as penalty multipliers for non-optimal controls: \begin{equation*} j_1(t)u_{imm} = j_2(t)(1-u_{imm}) = k_1(t)v_{ang} = k_2(t)(1-v_{ang}) = 0 \end{equation*} for the optimal controls \((u_{imm}^*, v_{ang}^*)\).

Theorem 2.Given optimal controls \(u_{imm}^*\) and \(v_{ang}^*\) and solutions of the corresponding state system, there exist adjoint variables \(\lambda_i\) for i = 1, 2, 3, 4, 5 satisfying: \begin{eqnarray*} \frac{d\lambda_1}{dt} = -\frac{\partial L}{\partial x} &=& -[\theta+ \lambda_1(\alpha_1(1-2x)-q_1y)-\lambda_2yq_2],\\ \frac{d\lambda_2}{dt} = -\frac{\partial L}{\partial y} &=& -\Big[-\psi -\lambda_1(q_1x) - \lambda_2\Big(\alpha_2\Big(1-\frac{2y}{1+\gamma z}\Big)-q_2x-q_3v+zp_2\Big) + \lambda_3 \beta + \lambda_4 r\Big],\\ \frac{d\lambda_3}{dt} = -\frac{\partial L}{\partial z} &=& -\Big[\lambda_2\Big(\frac{\alpha_2y^2\gamma}{(\gamma z+1)^2}+yp_2\Big)+\lambda_3\Big(\alpha_3(1-2z)-\frac{a_3p_3w}{(a_3+z)^2}\Big)+\lambda_5\Big(-\frac{a_3p_5w}{(a_3+z)^2}\Big)\Big],\end{eqnarray*} \begin{eqnarray*} \frac{d\lambda_4}{dt} &=& -\frac{\partial L}{\partial v} = -[\alpha - \lambda_2q_3y - \lambda_4d_4],\\ \frac{d\lambda_5}{dt} &=& -\frac{\partial L}{\partial w} = -\Big[\lambda_3\Big(-\frac{p_3z}{a_3+z}\Big)-\lambda_5\Big(\frac{p_5z}{a_3+z}+d_5\Big)\Big],\end{eqnarray*} where \(\lambda_i(T)=0\) for i=1, 2, 3, 4, 5 by the PMP transversality condition. Furthermore, from the optimality condition, \(u_{imm}^*\) is given by:

\begin{equation} u_{imm}^* = min\Big(max\Big(0,\frac{\lambda_4S_1}{b_1}\Big),1\Big), \end{equation}
(5)
while \(v_{ang}^*\) is similarly given by:
\begin{equation} v_{ang}^* = min\Big(max\Big(0,\frac{\lambda_5S_2}{b_2}\Big),1\Big). \end{equation}
(6)

Proof. Since the state variables are bounded, the maximum principle guarantees existence of the adjoint variables, as described above. The Lagrangian was maximized with respect to the variables in the optimal control pair by differentiating L with respect to \(u_{imm}\) and \(v_{ang}\). By doing this, we get the following:

\begin{equation} \frac{\partial L}{\partial u_{imm}} = -b_1u_{imm}+\lambda_4S_1+j_1(t)-j_2(t). \end{equation}
(7)
Thus, the representation of \(u_{imm}^*\) is \(\frac{\lambda_4S_1}{b_1}\).
\begin{equation} \frac{\partial L}{\partial v_{ang}} = -b_2v_{ang}+\lambda_5S_2+k_1(t)-k_2(t). \end{equation}
(8)
And thus the representation of \(v_{ang}^*\) is \(\frac{\lambda_5S_2}{b_2}\). Then, by using the bounds \(0 \leq u_{imm} \leq 1\) and \(0 \leq v_{ang} \leq 1\), we obtain the explicit control profiles given in Equations (5) and (6).

Since both state and adjoint solutions are \(L^\infty\)-bounded, the right side of the adjoint and state equations are Lipschitz for those solutions. This furthermore ensures that the solution of the optimality system is unique, given that the final time is not very large. A rigorous proof of such an argument can be found in [7] and [8]. Obviously, the uniqueness of the solutions for the optimality system implies uniqueness of the optimal control pair. Now, we have an explicit formulation for optimal controls, coupling the adjoint with the state equations and the initial and transversality conditions give the following optimality system: \begin{eqnarray*} \frac{dx}{dt} &=& \alpha_1x(1-x)-q_1xy,\\ \frac{dy}{dt} &=& \alpha_2y\Big(1-\frac{y}{1+\gamma z}\Big)-q_2xy-q_3yv+p_2yz,\\ \frac{dz}{dt} &=& \beta y+\alpha_3z(1-z)-\frac{p_3zw}{a_3+z},\\ \frac{dv}{dt}& =& min\Big(max\Big(0,\frac{\lambda_4S_1}{b_1}\Big),1\Big)S_1+ry-d_4v,\\ \frac{dw}{dt} &=& min\Big(max\Big(0,\frac{\lambda_5S_2}{b_2}\Big),1\Big)S_2-\frac{p_5zw}{a_3+z}-d_5w,\\ \frac{d\lambda_1}{dt} &=& -\frac{\partial L}{\partial x} = -[\theta+ \lambda_1(\alpha_1(1-2x)-q_1y)-\lambda_2yq_2],\\ \frac{d\lambda_2}{dt} &=& -\frac{\partial L}{\partial y} = -\Big[-\psi -\lambda_1(q_1x) - \lambda_2\Big(\alpha_2\Big(1-\frac{2y}{1+\gamma z}\Big)-q_2x-q_3v+zp_2\Big) + \lambda_3 \beta + \lambda_4 r\Big],\\ \frac{d\lambda_3}{dt} &=& -\frac{\partial L}{\partial z} = -\Big[\lambda_2\Big(\frac{\alpha_2y^2\gamma}{(\gamma z+1)^2}+yp_2\Big)+\lambda_3\Big(\alpha_3(1-2z)-\frac{a_3p_3w}{(a_3+z)^2}\Big)+\lambda_5\Big(-\frac{a_3p_5w}{(a_3+z)^2}\Big)\Big],\\ \frac{d\lambda_4}{dt} &=& -\frac{\partial L}{\partial v} = -[\alpha - \lambda_2q_3y - \lambda_4d_4],\\ \frac{d\lambda_5}{dt} &=& -\frac{\partial L}{\partial w} = -\Big[\lambda_3\Big(-\frac{p_3z}{a_3+z}\Big)-\lambda_5\Big(\frac{p_5z}{a_3+z}+d_5\Big)\Big]. \end{eqnarray*}
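Within an iterative scheme for this optimality system (for instance a forward-backward sweep, which is one standard alternative to the direct transcription used in Section 5), the control update at each iteration is exactly the clipped expressions of Equations (5) and (6). A minimal helper, given sampled adjoint trajectories (the function name is purely illustrative):

    import numpy as np

    def optimal_controls(lam4, lam5, S1, S2, b1, b2):
        # Projected controls of Equations (5)-(6), applied pointwise in time.
        u_imm = np.clip(lam4 * S1 / b1, 0.0, 1.0)
        v_ang = np.clip(lam5 * S2 / b2, 0.0, 1.0)
        return u_imm, v_ang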

4. Parameter values

We obtain the parameter values given in Table 1 to be used in the numerical optimal control and simulation experiments from [1].
Table 1. Parameters Used in Numerical Simulations.
Parameter Estimated Value
\(\alpha_1\) 6800
\(\alpha_2\) 0.01
\(\alpha_3\) 0.002
\(r\) 0.002
\(p_5\) 0.032
\(q_1\) 0.0072
\(q_2\) 0.00072
\(\beta\) 0.004
\(d_4\) 0.0132
\(d_5\) 0.136
\(q_3\) 0.01
\(p_3\) 1.8
\(p_2\) 0.002
\(a_3\) 0.49
\(\gamma\) 0.15
\(S_1\) 0.017
\(S_2\) 0.07

5. Numerical simulations

Several numerical simulations have been performed using the Python GEKKO optimization suite for different hypothetical patients. Though the model parameters remained constant for all patients, the weights in the objective functional were modified. All parameter values used can be found in Table 1; the initial values used were \(x = 0.8\), \(y = 0.0006\), \(z = v = w = 0\).
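For concreteness, a minimal sketch of how such a GEKKO problem could be set up is given below. It is not the author's script: the time discretization, solver options, and the use of a summed (rather than exactly integrated) objective are assumptions made for illustration; the parameter values follow Table 1 and the control-case weights of Section 5.1.

    import numpy as np
    from gekko import GEKKO

    # Model parameters (Table 1) and control-case objective weights (Section 5.1).
    par = dict(alpha1=6800, alpha2=0.01, alpha3=0.002, r=0.002, p5=0.032,
               q1=0.0072, q2=0.00072, beta=0.004, d4=0.0132, d5=0.136,
               q3=0.01, p3=1.8, p2=0.002, a3=0.49, gamma=0.15, S1=0.017, S2=0.07)
    alpha, theta, psi, b1, b2 = 1.2, 1.3, 3.0, 1.5, 1.5

    m = GEKKO(remote=False)
    m.time = np.linspace(0, 800, 401)   # assumed discretization of the therapy horizon

    x = m.Var(value=0.8)      # host cells
    y = m.Var(value=0.0006)   # cancer cells
    z = m.Var(value=0.0)      # endothelial cells
    v = m.Var(value=0.0)      # effector cells
    w = m.Var(value=0.0)      # anti-angiogenic agent

    u_imm = m.MV(value=0, lb=0, ub=1); u_imm.STATUS = 1
    v_ang = m.MV(value=0, lb=0, ub=1); v_ang.STATUS = 1

    m.Equation(x.dt() == par['alpha1']*x*(1-x) - par['q1']*x*y)
    m.Equation(y.dt() == par['alpha2']*y*(1 - y/(1 + par['gamma']*z))
               - par['q2']*x*y - par['q3']*y*v + par['p2']*y*z)
    m.Equation(z.dt() == par['beta']*y + par['alpha3']*z*(1-z)
               - par['p3']*z*w/(par['a3'] + z))
    m.Equation(v.dt() == u_imm*par['S1'] + par['r']*y - par['d4']*v)
    m.Equation(w.dt() == v_ang*par['S2'] - par['p5']*z*w/(par['a3'] + z) - par['d5']*w)

    # Maximize the integrand of Equation (1) by minimizing its negative over the time grid.
    m.Obj(-(alpha*v + theta*x - 0.5*b1*u_imm**2 - 0.5*b2*v_ang**2 - psi*y))

    m.options.IMODE = 6   # simultaneous dynamic optimization
    m.solve(disp=False)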

5.1. Control simulations

The first simulation run, shown in Figure 2, was the control simulation. In this simulation, the following values were used for the weighting parameters of the objective functional: \(\alpha = 1.2\), \(b_1 = 1.5\), \(b_2 = 1.5\), \(\theta = 1.3\), and \(\psi = 3.0\).

Figure 2. Numerical Optimal Control and Treatment Simulation Results – Control Case

The figure on the left represents the overall dynamics of the system along with the optimal control profile; the right figure zooms in on the anti-angiogenesis and the cancer and endothelial cells. As we can see, in this case the optimal control profile indicates that virtually no anti-angiogenic therapy should be used, while the immunotherapy should be used at full potential for the first \(\approx 600\) days, before quickly tapering off to 0 by the 800th day. We note that the effector and host cell populations are at healthy levels, with the endothelial population rising and the tumor population reaching zero.

5.2. Patient-specific side effect simulations

The second simulation run was one in which there was an especially low immunotherapy side effect. The same weighting values as the control were used, but the \(b_1\) parameter was reduced to 0.5. The results can be seen below in Figure 3:

Figure 3. Numerical Optimal Control and Treatment Simulation Results – Low Immunotherapy Side Effect Case

Here, we see a similar optimal control profile to the control case, but the immunotherapy treatment was used for a much longer time (until \(\approx 780\) days) before the treatment was stopped much more abruptly; no anti-angiogenic treatment appeared in the optimal profile in this case. The final dynamics showed healthy levels of effector and host cells, with growing levels of endothelial cells and tumor remission.
Next, a simulation for extremely low anti-angiogenic side effects was run. Control weighting parameters were used, but the \(b_2\) parameter was changed to 0.005. The results are seen in Figure 4 below:

Figure 4. Numerical Optimal Control and Treatment Simulation Results – Low Anti-Angiogenic Side Effect Case

This is the first case in which we notice a strong presence of anti-angiogenic use. Note, however, that the weighting term used for the side effects of anti-angiogenic drugs is extreme, and not typical of most patients (unless additional medications are given to offset the side effects). Here, we see that it is recommended that both treatments are initially given at full potential. At \(\approx 300\) days, both treatments begin tapering off, with the anti-angiogenic treatment tapering off more slowly. By day 800, the immunotherapy treatment has stopped, while the anti-angiogenic treatment is still prescribed at half of full capacity. Note that here the effector and host cell populations are at healthy levels, though the endothelial cells are at zero, and there is a very small non-zero tumor value at 800 days.
Finally, a simulation was run for extremely high immunotherapy side effects. The same parameters as the control were used, but the \(b_1\) term was changed to 10. The results are shown in Figure 5:

Figure 5. Numerical Optimal Control and Treatment Simulation Results – High Immunotherapy Side Effect Case

This is also not medically typical unless the patient suffers from the rare side effects of pneumonitis, hepatitis, colitis, severe hormonal/gland problems, or severe brain inflammation such as neuropathy, meningitis, or encephalitis. In this case, we note that both drugs are given at very low intensities (up to 4% of their potentials). The immunotherapy is given at about 0.035 for the first \(\approx\) 450 days, before tapering off to 0 by day 800. The anti-angiogenic drug steadily rises to a value of 0.04 by day 700, after which it plummets to 0 by day 800. Note here that, though the host cell population is healthy, the cancerous cells are rapidly growing; the endothelial cells show more modest growth.

Thus, it can be concluded that, as seen in [1], immunotherapy is typically the most effective at causing tumor remission. In the cases in which the optimal profile included anti-angiogenic drugs, we note that the cancer cell population was not completely eradicated. We also notice that anti-angiogenic drugs are only included in the optimal profile if their side effects are almost nil, or if the side effects of immunotherapy are very high.
Therefore, as stated in [1], though it may be the case that the anti-angiogenic drugs can aid immunotherapy treatment plans, in most cases, this is not part of the optimal treatment profile.

6. Conclusion

In this paper, a detailed optimal control analysis was performed on the model developed in [1] on a combination immunotherapy-anti-angiogenic drug therapy treatment regimen. An analytic characterization of optimal control protocols was given, along with comments on the existence and uniqueness of such profiles. Numerical simulations were performed for a variety of patients, using different weighting terms in the objective functional. It was found that, except for the most extreme cases, the use of anti-angiogenic inhibitors was not justified. The author hopes that this work will help inspire further research into the consideration of more effective, synergetic combination therapies involving anti-angiogenic inhibitors and perhaps into the further development of more effective anti-angiogenic drugs.

Acknowledgments

The author wishes to express his profound gratitude to the reviewers for their useful comments on the manuscript.

Author Contributions

All authors contributed equally to the writing of this paper. All authors read and approved the final manuscript.

Competing Interests

The author(s) do not have any competing interests in the manuscript.

References

  1. Shi, X., He, X., & Ou, X. (2015, December). A mathematical model and analysis of the anti-angiogenic and tumor immunotherapy. In 2015 4th International Conference on Computer Science and Network Technology (ICCSNT) (Vol. 1, pp. 1549-1553). IEEE. [Google Scholor]
  2. Rajabi, M., & Mousa, S. A. (2017). The role of angiogenesis in cancer treatment. Biomedicines, 5(2), 34. [Google Scholor]
  3. Zetter, B. R. (1998). Angiogenesis and tumor metastasis. Annual review of medicine, 49(1), 407-424. [Google Scholor]
  4. Ma, J., & Waxman, D. J. (2008). Combination of antiangiogenesis with chemotherapy for more effective cancer treatment. Molecular cancer therapeutics, 7(12), 3670-3684. [Google Scholor]
  5. Fleming, W. H., & Rishel, R. W. (2012). Deterministic and stochastic optimal control (Vol. 1). Springer Science & Business Media. [Google Scholor]
  6. Lukes, D. L. (1982). Differential equations: classical to controlled. Elsevier.[Google Scholor]
  7. Burden, T. N., Ernstberger, J., & Fister, K. R. (2004). Optimal control applied to immunotherapy. Discrete and Continuous Dynamical Systems Series B, 4(1), 135-146. [Google Scholor]
  8. Fister, K. R., Lenhart, S. & McNally, J. S. (1998). Optimizing chemotherapy in an HIV model. Electronic Journal of Differential Equations, 1998(32), 1-12. [Google Scholor]
]]>
Existence and uniqueness of mild solution for stochastic partial differential equation with poisson jumps and delays https://old.pisrt.org/psr-press/journals/oms-vol-3-2019/existence-and-uniqueness-of-mild-solution-for-stochastic-partial-differential-equation-with-poisson-jumps-and-delays/ Sat, 16 Nov 2019 15:22:12 +0000 https://old.pisrt.org/?p=3457
OMS-Vol. 3 (2019), Issue 1, pp. 343 – 348 Open Access Full-Text PDF
Annamalai Anguraj, Ravi kumar
Abstract: The objective of this paper is to investigate the existence and uniqueness theorem for stochastic partial differential equations with poisson jumps and delays. The existence of mild solutions of the problem is studied by using a different resolvent operator defined in [1] and fixed point theorem.
]]>

Open Journal of Mathematical Sciences

Existence and uniqueness of mild solution for stochastic partial differential equation with poisson jumps and delays

Annamalai Anguraj\(^1\), Ravi Kumar
Department of Mathematics, PSG College of Arts and Science, Coimbatore 641 046, India.; (A.A & R.K)
\(^{1}\)Corresponding Author: angurajpsg@yahoo.com

Abstract

The objective of this paper is to investigate the existence and uniqueness theorem for stochastic partial differential equations with poisson jumps and delays. The existence of mild solutions of the problem is studied by using a different resolvent operator defined in [1] and fixed point theorem.

Keywords:

Resolvent operator, mild solution, stochastic partial differential equations, poisson jumps, delays.

1. Introduction

Stochastic differential equations form an emerging field drawing attention from both theoretical and applied disciplines, and they have been successfully applied to problems in mechanics, electrical engineering, physics, economics and several other fields of engineering. For details see [2, 3] and the references therein. Recently a large number of interesting results on stochastic equations have been reported in [4, 5, 6, 7, 8]. Stochastic differential equations are used in the modeling of real life phenomena where there is a need for an aspect of randomness (see [9, 10, 11]).

Furthermore, several practical systems (such as sudden price variations due to market crashes, earthquakes, hurricanes, epidemics, and so on) experience jump-type stochastic perturbations. Since the sample paths are not continuous, it is appropriate to consider stochastic processes with jumps when describing such models. Generally, the jump models are derived from a Poisson random measure. The sample paths of such systems are right continuous and possess left limits. Recently, researchers have been focusing more on the theory and applications of impulsive stochastic functional differential equations with Poisson jumps. Precisely, existence and stability results on impulsive stochastic functional differential equations with Poisson jumps are found in [12, 13, 14, 15] and the references therein. Subsequently, a few works have been reported on the study of stochastic differential equations with Poisson jumps; we refer to [13, 14, 16].

Motivated by the above considerations, the aim of this paper is to establish existence and uniqueness results for the stochastic differential equation with Poisson jumps and delays of the form:

\begin{eqnarray}\label{1.1} du(t)&=&[Au(t)+ f(t,u(t-\rho(t)))]\, dt+g(t,u(t-\delta(t)))\, dW(t)+\int_{Z} h(t,u(t-\sigma(t)),z)\, \tilde{N}(dt,dz),\nonumber\\ u_{0} &=& \xi \in D^{b}_{\mathcal{F}_{0}}([-\tau,0],H). \end{eqnarray}
(1)

The mappings \( f : \mathbb{R}_{+} \times D([-\tau, 0]; H) \rightarrow H,\ g: \mathbb{R}_{+} \times D([-\tau, 0]; H) \rightarrow \mathbb{L}^{0}_{2}(K,H),\ h: \mathbb{R}_{+} \times D([-\tau, 0]; H)\times Z \rightarrow H\) are Borel measurable, and \(\rho : \mathbb{R}_{+} \rightarrow [0, \tau],\ \delta : \mathbb{R}_{+} \rightarrow [0, \tau],\ \sigma : \mathbb{R}_{+} \rightarrow [0, \tau] \) are continuous.
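To make the structure of equation (1) concrete, the following short sketch integrates a scalar toy analogue of it with an Euler-Maruyama scheme. The generator \(a\), the coefficients \(f, g, h\), the jump intensity, the delays and the initial history \(\xi\) below are illustrative assumptions only; they are not taken from this paper.

import numpy as np

# Euler-Maruyama sketch for a scalar toy analogue of equation (1):
#   du = [a*u(t) + f(u(t - rho))] dt + g(u(t - delta)) dW(t) + jump term,
# with constant delays and a compensated Poisson jump integral.
rng = np.random.default_rng(0)

a, T, dt = -1.0, 5.0, 1e-3
rho = delta = sigma = 0.5           # constant delays, all bounded by tau = 0.5
lam = 2.0                           # jump intensity of the Poisson random measure

f = lambda u: 0.5 * np.sin(u)       # drift nonlinearity (Lipschitz, in the spirit of (H1))
g = lambda u: 0.3 * u               # diffusion coefficient (Lipschitz, in the spirit of (H1))
h = lambda u, z: 0.2 * u * z        # jump coefficient (Lipschitz in u, in the spirit of (H2))
xi = lambda t: 1.0                  # initial history on [-tau, 0]

n = int(T / dt)
lag = int(rho / dt)                 # identical lag for the three delays in this toy example
u = np.empty(n + 1)
u[0] = xi(0.0)

def delayed(i):
    # value of u at time i*dt - rho, falling back to the initial history xi
    j = i - lag
    return u[j] if j >= 0 else xi(j * dt)

for i in range(n):
    ud = delayed(i)
    drift = (a * u[i] + f(ud)) * dt
    diffusion = g(ud) * np.sqrt(dt) * rng.standard_normal()
    k = rng.poisson(lam * dt)                 # number of jumps in (t, t + dt]
    marks = rng.standard_normal(k)            # jump marks z with law N(0,1)
    jumps = h(ud, marks).sum()                # integral of h against N(dt, dz)
    # The compensator lam*dt*E_z[h(ud, z)] vanishes here because E[z] = 0,
    # so the integral against the compensated measure reduces to the raw jump sum.
    u[i + 1] = u[i] + drift + diffusion + jumps

print("u(T) =", u[-1])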

This paper is organized as follows: In Section 2, we give some basic definitions and results, which will be used in the sequel. In Section 3, the existence result for the system (1) is proved.

2. Preliminaries

Let \(\mathbb{H}\) and \(\mathbb{K}\) be two real separable Hilbert spaces. Let \(\mathcal{L}(\mathbb{H}, \mathbb{K})\) denote the space of all bounded linear operators from \(\mathbb{H}\) into \(\mathbb{K}\), equipped with the usual operator norm \(\left\| \cdot \right\|\); we abbreviate this notation to \(\mathcal{L}(\mathbb{H})\) when \(\mathbb{H} = \mathbb{K}\). In this paper, we always use the same symbol \(\left\| \cdot \right\|\) to denote norms of operators regardless of the spaces involved when no confusion can arise. Let \((\Omega, \mathcal{F}, \{\mathcal{F}_{t}\}_{t \geq 0}, \mathbb{P})\) be a complete probability space with a normal filtration \(\{\mathcal{F}_{t}\}_{t \geq 0}\) satisfying the usual conditions (i.e. it is increasing and right continuous, and \(\mathcal{F}_{0}\) contains all \(\mathbb{P}\)-null sets).

Let \(\left\{W(t) : t \geq 0 \right\}\) denote a \(\mathbb{K}\)-valued Wiener process with covariance operator \(Q\), defined on the probability space \((\Omega, \mathcal{F}, \{\mathcal{F}_{t}\}_{t \geq 0}, \mathbb{P})\) and independent of the Poisson point process; that is, \(E \left\langle W(t),x\right\rangle _{\mathbb{K}}\left\langle W(s),y\right\rangle _{\mathbb{K}} = (t \wedge s) \left\langle Qx,y \right\rangle _{\mathbb{K}}\) for all \(x, y \in \mathbb{K}\), where \(Q\) is a positive, self-adjoint, trace class operator on \(\mathbb{K}\). In particular, we call \(W(t)\) a \(\mathbb{K}\)-valued \(Q\)-Wiener process with respect to \(\{\mathcal{F}_{t}\}_{t \geq 0}\). To define stochastic integrals with respect to the \(Q\)-Wiener process \(W(t)\), we introduce the subspace \(\mathbb{K}_{0} = Q^{\frac{1}{2}} \mathbb{K}\) of \(\mathbb{K}\), which, endowed with the inner product \(\left\langle u,v\right\rangle_{\mathbb{K}_{0}} = \left\langle Q^{-\frac{1}{2}}u,Q^{-\frac{1}{2}}v\right\rangle_{\mathbb{K}}\), is a Hilbert space. We assume that there exist a complete orthonormal system \(\{e_{i}\}\) in \(\mathbb{K}\), a bounded sequence of positive real numbers \(\lambda_{i}\) such that \(Qe_{i} = \lambda_{i}e_{i}\), \(i = 1,2,3,\ldots\), and a sequence \(\{\beta_{i}(t)\}_{i \geq 1}\) of independent standard Brownian motions such that \(W(t) = \sum \limits ^{+\infty}_{i=1} \sqrt{\lambda_{i}}\, \beta_{i}(t)e_{i}\) for \(t \geq 0\) and \(\mathcal{F}_{t} = \mathcal{F}^{w}_{t}\), where \(\mathcal{F}^{w}_{t}\) is the \(\sigma\)-algebra generated by \(\{W(s):0 \leq s \leq t\}\).
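As a complement to the series representation \(W(t) = \sum_{i}\sqrt{\lambda_{i}}\,\beta_{i}(t)e_{i}\), the following sketch samples the coordinates of a \(Q\)-Wiener process on a time grid by truncating that expansion. The eigenvalues \(\lambda_{i} = 1/i^{2}\), the truncation level and the step size are illustrative assumptions made only for this example.

import numpy as np

# Sample the first m coordinates of a Q-Wiener process W(t) = sum_i sqrt(lambda_i)*beta_i(t)*e_i
# on a uniform time grid, using independent standard Brownian motions beta_i.
rng = np.random.default_rng(1)

m, n, dt = 20, 1000, 1e-3
lam = 1.0 / np.arange(1, m + 1) ** 2          # trace class: sum(lam) < infinity

# Brownian motions beta_i on the grid: cumulative sums of N(0, dt) increments, one row per mode.
dbeta = np.sqrt(dt) * rng.standard_normal((m, n))
beta = np.cumsum(dbeta, axis=1)

# Coordinates <W(t_k), e_i> for i = 1..m and k = 1..n.
W_coords = np.sqrt(lam)[:, None] * beta

# Sanity check: <W(T), e_1> has variance lambda_1 * T, i.e. standard deviation sqrt(lambda_1 * T).
print("sample value:", W_coords[0, -1], "  theoretical std:", np.sqrt(lam[0] * n * dt))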

Suppose \(\left\{p(t), t \geq 0 \right\}\) is a \(\sigma\)-finite stationary \(\mathcal{F}_{t}\)-adapted Poisson point process taking values in a measurable space \(\left( U, \mathcal{B}(U) \right)\). The random measure \(N_{p}\) defined by \(N_{p}\left((0,t] \times \Lambda \right) := \sum \limits _{s \in (0, t]}I_{\Lambda}(p(s))\) for \(\Lambda \in \mathcal{B}(U)\) is called the Poisson random measure induced by \(p(\cdot)\). Thus, we can define the measure \(\tilde{N}\) by \(\tilde{N}(dt, dz) = N_{p}(dt,dz) - v(dz)\, dt\), where \(v\) is the characteristic measure of \(N_{p}\); \(\tilde{N}\) is called the compensated Poisson random measure. The main source for the material on Poisson processes and random measures is [17]. For a Borel set \(Z \in \mathcal{B}_{\sigma}(\mathbb{H}\setminus \left\{0 \right\})\), we denote by \(\mathcal{P}^{2}([0, T] \times Z; \mathbb{H})\) the space of all predictable mappings \(H: [0,T] \times Z \times \Omega \rightarrow \mathbb{H}\) for which \( \int_{0}^{T} \int_{Z} E \left\| H(t,z) \right\|^{2}\, v(dz)\, dt < \infty \). Then one can define the \(\mathbb{H}\)-valued stochastic integral \(\int^{t}_{0} \int_{Z} H(s, z)\, \tilde{N}(ds, dz)\), which is a centered square integrable martingale [18]. We always assume that \(W(t)\) and \(\tilde{N}\) are independent of \(\mathcal{F}_{0}\). Also, \(S = C([0, a]; X)\) denotes the space of all continuous functions with the norm \(\left\|x\right\|_{C([0,a];X)}=\sup_{t\in[0,a]} \left\|x(t)\right\|_{X}\).
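The following small Monte Carlo sketch illustrates the centering property of the compensated Poisson integral just described, under the illustrative choices \(Z = \mathbb{R}\), characteristic measure \(v(dz) = \lambda\,\varphi(z)\,dz\) with \(\varphi\) the standard normal density, and deterministic integrand \(H(s,z) = s z^{2}\); none of these choices come from the paper.

import numpy as np

# Monte Carlo check that int_0^T int_Z H(s,z) N~(ds,dz) has mean zero for H(s,z) = s*z^2.
rng = np.random.default_rng(2)
T, lam, n_paths = 1.0, 3.0, 20000

def compensated_integral():
    k = rng.poisson(lam * T)                  # number of atoms of N on (0, T]
    s = rng.uniform(0.0, T, k)                # jump times (uniform on (0, T] given k)
    z = rng.standard_normal(k)                # jump marks with law N(0, 1)
    raw = np.sum(s * z ** 2)                  # integral of H against N(ds, dz)
    compensator = lam * T ** 2 / 2.0          # int_0^T int_Z s*z^2 * lam*phi(z) dz ds = lam*T^2/2
    return raw - compensator

samples = np.array([compensated_integral() for _ in range(n_paths)])
print("sample mean:", samples.mean(), "(close to 0, as for a centered martingale at time T)")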

Consider the following stochastic partial differential equation driven by Poisson jumps with delays: \begin{eqnarray*} du(t)&=&[Au(t)+ f(t,u(t-\rho(t)))]\, dt+g(t,u(t-\delta(t)))\, dW(t)+\int_{Z} h(t,u(t-\sigma(t)),z)\, \tilde{N}(dt,dz). \end{eqnarray*} The above equation is equivalent to the following integral equation:
\begin{eqnarray}\label{2.1} u(t) &=& \xi(0) + \int^{t}_{0}Au(s)\,ds +\int^{t}_{0}f(s,u(s-\rho(s)))\,ds+\int^{t}_{0}g(s,u(s-\delta(s)))\,dW(s)\nonumber\\ &&+ \int^{t}_{0}\int_{Z}h(s,u(s-\sigma(s)),z)\,\tilde{N}(ds,dz). \end{eqnarray}
(2)
This can be written in the following form:
\begin{eqnarray}\label{2.2} u(t) = f(t) + \int ^{t}_{0} S'(t-s)\, f(s)\, ds, \end{eqnarray}
(3)
where \begin{eqnarray*} f(t) &=& \xi (0) + \int^{t}_{0}f(s,u(s-\rho(s)))\,ds +\int^{t}_{0}g(s,u(s-\delta(s)))\,dW(s)\\&&+\int^{t}_{0}\int_{Z}h(s,u(s-\sigma(s)),z)\,\tilde{N}(ds,dz). \end{eqnarray*} Let us assume that the integral equation (2) admits an associated resolvent operator \(\left\{S(t)\right\}_{t\geq 0}\) on \(H\).

Definition 1. [1] A family \((S(t))_{t \geq 0} \subset \mathcal{L}(X)\) of bounded linear operators on \(X\) is called a resolvent for (2) (or a solution operator for (2)) if the following conditions are satisfied:

  • [(S1)] \(S(t)\) is strongly continuous on \(\mathbb{R}^{+}\) and \(S(0)=I\),
  • [(S2)] \(S(t)\) commutes with \(A\), which means that \(S(t)\mathcal{D}(A) \subset \mathcal{D}(A)\) and \(AS(t)x = S(t)Ax\) for all \(x \in \mathcal{D}(A)\) and \(t \geq 0\);
  • [(S3)] The resolvent equation holds: \begin{align*} S(t)x = x+ \int^{t}_{0} AS(s)x\,ds. \end{align*}

Definition 2. A resolvent \(S(t)\) for (2) is called differentiable if \(S(\cdot)x \in W^{1,1}(\mathbb{R^{+}};X)\) for each \(x \in \mathcal{D}(A)\) and there is \(\phi_{A} \in L^{1}_{loc}(\mathbb{R^{+}})\) such that \(\|S'(t)x\| \leq \phi_{A}(t)\|x\|_{[\mathcal{D}(A)]}\) a.e. on \(\mathbb{R^{+}}\) for each \(x \in \mathcal{D}(A)\), where \([\mathcal{D}(A)]\) denotes the domain of the operator \(A\) equipped with the graph norm \(\left\| x \right\| _{[\mathcal{D}(A)]} = \left\|x \right\| + \left\|Ax \right\|\).

Lemma 3. [1] Suppose that (2) admits a differentiable resolvent \(S(t)\). If \(f \in C([0,a];\mathcal{D}(A))\), then \begin{eqnarray*} u(t)=f(t)+\int_{0}^{t}S'(t-s)f(s)\,ds,~t \in [0,a], \end{eqnarray*} is a mild solution of (2).
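For a quick sanity check of Lemma 3, one may take the scalar case \(A = a \in \mathbb{R}\) (an assumption made only for illustration), for which \(S(t) = e^{at}\) and \(S'(t) = a e^{at}\). The sketch below verifies numerically that \(u(t) = f(t) + \int_{0}^{t} S'(t-s) f(s)\,ds\) then satisfies the integral equation \(u(t) = f(t) + \int_{0}^{t} A u(s)\,ds\), i.e. (2) with the stochastic terms absorbed into \(f\).

import numpy as np

# Numerical check of the resolvent formula of Lemma 3 in the scalar case A = a.
a = -0.7
t = np.linspace(0.0, 2.0, 801)
f = np.cos(3.0 * t) + t                       # an arbitrary smooth forcing term f(t)

def trap(y, x):
    # composite trapezoidal rule (returns 0 for a single node)
    return float(np.sum(0.5 * (y[1:] + y[:-1]) * np.diff(x))) if len(x) > 1 else 0.0

# Mild solution u(t_k) = f(t_k) + int_0^{t_k} a*exp(a*(t_k - s)) f(s) ds.
u = np.array([f[k] + trap(a * np.exp(a * (t[k] - t[:k + 1])) * f[:k + 1], t[:k + 1])
              for k in range(len(t))])

# Residual of u(t) = f(t) + int_0^t a*u(s) ds; only quadrature error should remain.
residual = max(abs(u[k] - (f[k] + trap(a * u[:k + 1], t[:k + 1]))) for k in range(len(t)))
print("max residual:", residual)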

In order to prove the existence result for the stochastic partial differential equation with Poisson jumps and delays, we need the following assumptions:
  • [(H1)] The mappings \(f(t,\cdot)\) and \(g(t,\cdot)\) satisfy the following Lipschitz conditions: for any \(x, y \in \mathbb{H}\) and \( t \geq 0\), \begin{eqnarray*} \left\| f(t,x) - f(t,y)\right\| _{\mathbb{H}} \leq L_{1} \left\|x - y \right\| _{\mathbb{H}}, ~~\text{where } L_{1} > 0,\\ \left\| g(t,x) - g(t,y)\right\| _{\mathbb{H}} \leq L_{2} \left\|x - y \right\| _{\mathbb{H}}, ~~\text{where } L_{2} > 0. \end{eqnarray*}
  • [(H2)] The mapping \(h(t,\cdot,\cdot)\) satisfies the global Lipschitz condition: for any \(x, y \in \mathbb{H}\) and \( t \geq 0\), \begin{eqnarray*} \int _{Z} \left\| h(t, x, z) - h(t, y, z)\right\|^{2} v(dz) \leq L_{3}^{2} \left\| x - y \right\| ^{2}, ~~\text{where } L_{3} > 0. \end{eqnarray*}

3. Existence and uniqueness results

In this section, we prove the existence result for (1). This problem is equivalent to the following integral equation: \begin{eqnarray*} u(t) &=& \xi(0) + \int^{t}_{0}Au(s)\,ds +\int^{t}_{0}f(s,u(s-\rho(s)))\,ds+\int^{t}_{0}g(s,u(s-\delta(s)))\,dW(s)\\&&+ \int^{t}_{0}\int_{Z}h(s,u(s-\sigma(s)),z)\,\tilde{N}(ds,dz). \end{eqnarray*} By Lemma 3 and the above representation, the mild solution of (1) can be defined as follows:

Definition 4. A stochastic process \(\left\{ u(t), t \in [0,T]\right\}\), \(0\leq T\leq \infty\), is called a mild solution of (1) if

  • \(u(t)\) is adapted to \(\mathcal{F}_{t}\), \(t\geq 0\);
  • \(u(t)\in \mathbb{H}\) has càdlàg paths on \([0, T]\) almost surely, and for arbitrary \(0\leq t\leq T\),
\begin{eqnarray*} u(t) & =& \xi(0)+ \int^{t}_{0}f[s,u(s-\rho(s))]\, ds+ \int^{t}_{0}g[s,u(s-\delta(s))]\, dW(s)\\&&+ \int^{t}_{0} \int_{Z}h[s,u(s-\sigma(s)),z]\, \tilde{N}(ds\, dz) + \int^{t}_{0}S'(t-s)\, \xi(0)\, ds+ \int^{t}_{0}S'(t-s) \int^{s}_{0} f[\tau,u(\tau-\rho(\tau))]\, d\tau\, ds\\ &&+ \int^{t}_{0} S'(t-s) \int^{s}_{0} g[\tau, u(\tau - \delta(\tau))]\, dW(\tau)\, ds + \int^{t}_{0}S'(t-s) \int^{s}_{0} \int_{Z} h[\tau, u(\tau- \sigma(\tau)),z]\, \tilde{N}(d\tau\, dz)\, ds. \end{eqnarray*}

Theorem 5. Assume that \((H1)\) and \((H2)\) hold. Then the problem (1) has a unique mild solution.

Proof. Define the operator \(\Phi : S \rightarrow S \) by \(\Phi(u)(t) = \xi(t)\) for \(t \in [-\tau, 0]\) and, for \(t \geq 0\), by \begin{eqnarray*} \Phi(u)(t) &=& \xi(0)+ \int^{t}_{0}f[s,u(s-\rho(s))]\, ds+ \int^{t}_{0}g[s,u(s-\delta(s))]\, dW(s)\\&&+ \int^{t}_{0} \int_{Z}h[s,u(s-\sigma(s)),z]\, \tilde{N}(ds\, dz) + \int^{t}_{0}S'(t-s)\, \xi(0)\, ds\\&&+ \int^{t}_{0}S'(t-s) \int^{s}_{0} f[\tau,u(\tau-\rho(\tau))]\, d\tau\, ds+ \int^{t}_{0} S'(t-s) \int^{s}_{0} g[\tau, u(\tau - \delta(\tau))]\, dW(\tau)\, ds\\&&+ \int^{t}_{0}S'(t-s) \int^{s}_{0} \int_{Z} h[\tau, u(\tau- \sigma(\tau)),z]\, \tilde{N}(d\tau\, dz)\, ds. \end{eqnarray*} First, we verify that \(\Phi\) is continuous in \(p\)th mean on \([0, \infty)\). Let \(u \in S\), \(t_{1} \geq 0\) and let \(|h|\) be sufficiently small. Then \begin{eqnarray*} E \left\|(\Phi u) (t_{1} + h) - (\Phi u)(t_{1}) \right\|^{p}_{H} \leq 7^{p-1} \sum \limits ^{7}_{i= 1} E \left\| I_{i}(t_{1}+ h) - I_{i}(t_{1}) \right\|^{p}_{H}. \end{eqnarray*} By Hölder's inequality and the Burkholder-Davis-Gundy inequality, we have \begin{eqnarray*} E \left\|I_{2} (t_{1}+ h) - I_{2}(t_{1}) \right\|^{p}_{H}&\leq& E \left\| \int^{t_{1}+ h}_{0} g[s, u(s- \delta(s))]\, dW(s) - \int^{t_{1}}_{0} g[s, u(s- \delta(s))]\, dW(s) \right\|^{p}_{H}\\ &\leq& E \left\| \int^{t_{1}+{h}}_{t_{1}} g [s, u(s- \delta (s))]\, dW(s)\right\| ^{p}_{H}\\ &\leq& c_{p}\, E\left( \int ^{t_{1}+ {h}}_ {t_{1}} \left\| g[s, u(s- \delta(s))] \right\| ^{2}_{H}\, ds \right)^{p/2}\\ &\leq& c_{p}\, h^{\frac{p}{2}-1} \int ^{t_{1}+ {h}}_ {t_{1}} E \left\| g [s, u(s - \delta(s))] \right\|^{p}_{H}\, ds \rightarrow 0~~\text{ as}~~ h\rightarrow 0. \end{eqnarray*} Next, \begin{eqnarray*} E \left\| I_{7}(t_{1}+ h) - I_{7}(t_{1}) \right\| ^{p}_{H}& \leq& 2^{p-1 }E \big\| \int ^{t_{1}}_{0} \left[S'(t_{1}+ h - s) - S'(t_{1}- s)\right] \left[ \int^{s}_{0} \int_{Z} h [\tau ,u (\tau - \sigma (\tau)),z]\, \tilde{N} (d \tau\, dz) \right] ds \big\| ^{p}_{H}\\ &&+2^{p-1 }E \left\| \int^{t_{1}+ h} _{t_{1}} S'(t_{1} + h - s) \int ^{s}_{0} \int _{Z} h[\tau, u (\tau- \sigma (\tau)), z]\, \tilde{N}(d \tau\, dz)\, ds\right\| ^{p}_{H}\\ &&\rightarrow 0~~\text{ as}~~ h\rightarrow 0. \end{eqnarray*} Similarly, we can verify that \begin{eqnarray} E\left\|I_{i}(t_{1}+h) - I_{i}(t_{1})\right\|^{p}_{H} \rightarrow 0, ~~~~i = 1,3,4,5,6, ~~\text{as}~~ h\rightarrow 0, \end{eqnarray} where \(c_{p} = (p(p-1)/2)^{p/2}\). Thus \(\Phi\) is indeed continuous in \(p\)th mean on \([0, \infty)\). Next, we show that \(\Phi (S) \subset S\). It follows from (1) that \begin{eqnarray*} E\left\|(\Phi u)(t)\right\|^{p}_{H}&\leq& 8^{p-1}E\left\|\xi(0)\right\|^{p}_{H}+8^{p-1}E\left\|\int^{t}_{0}f[s, u(s-\rho(s))]\, ds \right\|^{p}_{H}\\ &&+ 8^{p-1}E \left\| \int ^{t}_{0} g[s, u (s - \delta (s))]\, dW(s) \right\| ^{p}_{H} + 8^{p-1}E \left\| \int ^{t}_{0} \int _{Z} h[s , u (s - \sigma (s)), z]\, \tilde{N} (ds\, dz) \right\| ^{p}_{H}\\ &&+ 8^{p-1}E \left\| \int ^{t}_{0} S'(t-s)\, \xi (0)\, ds \right\| ^{p}_{H} + 8^{p-1}E \left\| \int ^{t}_{0} S'(t-s) \int ^{s}_{0} f[\tau, u (\tau- \rho (\tau))]\, d\tau\, ds \right\| ^{p}_{H}\\ &&+ 8^{p-1}E \left\| \int ^{t}_{0} S'(t-s) \int ^{s}_{0} g[\tau, u (\tau- \delta (\tau))]\, dW(\tau)\, ds \right\| ^{p}_{H}\\ &&+ 8^{p-1}E \left\| \int ^{t}_{0} S'(t-s) \int ^{s}_{0} \int _{Z} h[\tau, u (\tau- \sigma (\tau)), z]\, \tilde{N} (d\tau\, dz)\, ds \right\| ^{p}_{H}= \sum \limits ^{8} _{i = 1} J_{i} (t). \end{eqnarray*} We now estimate \(J_{i}\), \(i = 1, 2, \ldots, 8\). First, \begin{eqnarray*} J_{1}(t)\leq \left\| \xi \right\| ^{p}_{D} < \infty. \end{eqnarray*} By (H1), we obtain \begin{align*} J_{2}(t) &\leq E \left[ \int^{t}_{0} \left\| f[s, u (s- \rho(s))] \right\| _{H} ds \right] ^{p}\leq L^{p}_{1} \left\| u \right\|^{p}_{D}\, T. \end{align*} From the lemma of Da Prato and Zabczyk [4] and by (H1), we have \begin{align*} J_{3}(t)& \leq c_{p} \left[ \int^{t}_{0}\left( E\left\|g[s, u(s-\delta(s))]\right\|^{p}_{H}\right)^{\frac{2}{p}} ds \right]^{\frac{p}{2}} \leq c_{p} L_{2}^{p} \left\| u \right\| ^{p}_{D}\, T. \end{align*} Similarly, by (H2), we obtain \begin{equation*} J_{4}(t) \leq c_{p}\, E\left[ \int^{t}_{0} \int _{Z} \left\| h[s, u (s - \sigma (s)), z] \right\|^{2}_{H}\, v(dz)\, ds \right] ^{\frac{p}{2}}\leq c_{p} L_{3}^{p} \left[ \int ^{t}_{0} E \left\| u(s - \sigma(s)) \right\|^{2}_{H}\, ds \right]^{\frac{p}{2}} \leq c_{p} L_{3}^{p} \left\| u \right\| ^{p}_{D}\, T. \end{equation*} By (H1), (H2) and the same lemma of Da Prato and Zabczyk [4], we have \begin{align*} J_{5}(t)& \leq \left\| \xi (0) \right\| \left\| \phi_{A}\right\|_{{L^{1}}([0,t]; \mathbb{R}^{+})}, \end{align*} \begin{align*} J_{6}(t) &\leq L^{p}_{1} \int^{t}_{0} \phi _{A} (t - s) \int ^{s} _{0} E \left\| u(\tau - \rho (\tau)) \right\|^{p}_{H}\, d \tau\, ds \leq L_{1}^{p}\left\| u \right\|^{p}_{D}\, T \left\| \phi_{A}\right\|_{{L^{1}}([0,t]; \mathbb{R}^{+})}, \end{align*} \begin{align*} J_{7}(t)& \leq c_{p} L^{p}_{2} \int^{t}_{0} \phi_{A} (t - s) \int ^{s}_{0} E \left\| u (\tau - \delta (\tau)) \right\| ^{p}_{H}\, d\tau\, ds\leq c_{p} L_{2}^{p}\left\| u \right\|^{p}_{D}\, T \left\| \phi_{A}\right\|_{{L^{1}}([0,t]; \mathbb{R}^{+})}, \end{align*} \begin{align*} J_{8}(t)& \leq c_{p}\, E\left[ \int ^{t}_{0} \phi_{A} (t-s) \int ^{s}_{0} \int _{Z} \left\| h[\tau , u(\tau - \sigma (\tau)),z] \right\|^{2} v (dz)\, d\tau\, ds \right] ^{\frac{p}{2}} \leq c_{p} L_{3}^{p}\left\| u \right\|^{p}_{D}\, T \left\| \phi_{A}\right\|_{{L^{1}}([0,t]; \mathbb{R}^{+})}. \end{align*} From the above estimates, we have \(E\left\|(\Phi u)(t)\right\|^{p}_{H} < \infty \). So we conclude that \(\Phi (S) \subset S\). Next, we show that \(\Phi\) is a contraction mapping.
Let \( u , v \in S\). Then we have \begin{align*} &E \sup \limits _{t \in [0,T]} \left\| (\Phi u)(t) - (\Phi v)(t) \right\|^{p}_{H}\\&\leq 6 ^{p- 1} \sup \limits _{t \in [0, T]} E \left\| \int ^{t}_{0} \left(f[s,u(s - \rho (s))]- f[s,v(s - \rho (s))]\right) ds \right\|^{p}_{H}\\&\,\,\,\,+ 6 ^{p- 1} \sup \limits _{t \in [0, T]} E \left\| \int ^{t}_{0} \left(g[s,u(s - \delta (s))] - g[s,v(s - \delta (s))]\right) dW(s) \right\|^{p}_{H}\\ &\,\,\,\,+ 6 ^{p- 1} \sup \limits _{t \in [0, T]} E \left\| \int ^{t}_{0} \int_{Z}\left(h[s,u(s - \sigma (s)), z] - h[s,v(s - \sigma (s)),z]\right)\tilde{N} (ds, dz) \right\|^{p}_{H} \\ &\,\,\,\,+ 6 ^{p- 1} \sup \limits _{t \in [0, T]} E \left\| \int ^{t}_{0} S'(t- s) \left[\int ^{s}_{0} \left(f[\tau , u (\tau - \rho (\tau))] - f[\tau , v (\tau - \rho (\tau))]\right)d \tau \right]ds\right\| ^{p}_{H}\\ & \,\,\,\,+ 6 ^{p- 1} \sup \limits _{t \in [0, T]} E \left\| \int ^{t}_{0} S'(t- s) \left[\int ^{s}_{0} \left(g[\tau , u (\tau - \delta (\tau))] -g[\tau , v (\tau - \delta (\tau))]\right) dW (\tau) \right]ds\right\| ^{p}_{H}\\ &\,\,\,\,+ 6 ^{p- 1} \sup \limits _{t \in [0, T]} E \biggl\| \int ^{t}_{0} S'(t - s)\biggl[\int ^{s}_{0} \int_{Z}\left(h[\tau, u (\tau- \sigma(\tau)), z]- h[\tau, v (\tau- \sigma(\tau)), z]\right) \tilde{N} (d \tau\, dz)\biggr] ds \biggr \| ^{p}_{H}\\ &\leq 6 ^{p- 1}\, T \left(L_{1}^{p}+ c_{p} L_{2}^{p}+L_{3}^{p} \right)\left[1 + \left\| \phi _{A}\right\| _{L^{1}([0,T]; \mathbb{R}^{+})} \right] \sup \limits _{t \in [0, T]} E \left\| u(t) - v(t) \right\| ^{p}_{H}. \end{align*} If \(T> 0\) is sufficiently small, then we can ensure that \begin{eqnarray*} 6^{p-1}\, T\left(L_{1}^{p}+ c_{p} L_{2}^{p}+L_{3}^{p} \right)\left[1 + \left\| \phi _{A}\right\| _{L^{1}([0,T]; \mathbb{R}^{+})} \right]< 1. \end{eqnarray*} We conclude that the operator \(\Phi\) satisfies the contraction mapping principle, and hence there exists a unique mild solution of (1) on \([0,T]\).
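The proof above rests on the contraction mapping principle. As a purely illustrative companion, the following sketch runs Picard (successive approximation) iterates of a deterministic toy analogue of \(\Phi\), namely \((\Phi u)(t) = \xi(0) + \int_{0}^{t}\left[a\,u(s) + f(u(s-\rho))\right]ds\), on a short interval and shows the iterates converging, exactly as the fixed-point argument predicts; the constants, coefficients and delay are assumptions chosen only for this demonstration.

import numpy as np

# Picard iteration for a deterministic toy analogue of the operator Phi used in the proof.
a, rho, T, dt = -1.0, 0.2, 0.5, 1e-3
f = lambda u: 0.5 * np.sin(u)                 # Lipschitz nonlinearity
xi = lambda t: 1.0                            # initial history on [-rho, 0]

t = np.arange(0.0, T + dt, dt)
lag = int(rho / dt)

def delayed(u, i):
    # value of u at time i*dt - rho, falling back to the initial history xi
    j = i - lag
    return u[j] if j >= 0 else xi(j * dt)

def Phi(u):
    # (Phi u)(t_i) = xi(0) + trapezoidal integral of a*u(s) + f(u(s - rho)) over [0, t_i]
    integrand = np.array([a * u[i] + f(delayed(u, i)) for i in range(len(t))])
    increments = 0.5 * (integrand[1:] + integrand[:-1]) * dt
    return xi(0.0) + np.concatenate(([0.0], np.cumsum(increments)))

u = np.full_like(t, xi(0.0))                  # initial guess: the constant history value
for k in range(8):
    u_new = Phi(u)
    print(f"iteration {k}: sup|Phi(u) - u| = {np.max(np.abs(u_new - u)):.2e}")
    u = u_new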

Author Contributions

All authors contributed equally to the writing of this paper. All authors read and approved the final manuscript.

Competing Interests

The author(s) do not have any competing interests in the manuscript.

References

  1. Prüss, J. (2013). Evolutionary integral equations and applications (Vol. 87). Birkhäuser Verlag, Basel. [Google Scholor]
  2. Karatzas, I., & Shreve, S. E. (1991). Brownian motion and stochastic calculus. Springer-Verlag, Berlin. [Google Scholor]
  3. Mao, X. (2007). Stochastic differential equations and applications. Ellis Horwood, Chichester, UK. [Google Scholor]
  4. Da Prato, G., & Zabczyk, J. (2014). Stochastic equations in infinite dimensions. Cambridge University Press. [Google Scholor]
  5. Hale, J. K., & Lunel, S. M. V. (2013). Introduction to functional differential equations (Vol. 99). Springer Science & Business Media. [Google Scholor]
  6. Taniguchi, T., Liu, K., & Truman, A. (2002). Existence, uniqueness, and asymptotic behavior of mild solutions to stochastic functional differential equations in Hilbert spaces. Journal of Differential Equations, 181(1), 72-91. [Google Scholor]
  7. Xu, D., Yang, Z., & Huang, Y. (2008). Existence–uniqueness and continuation theorems for stochastic functional differential equations. Journal of Differential Equations, 245(6), 1681-1703.[Google Scholor]
  8. Ren, Y., & Chen, L. (2009). A note on the neutral stochastic functional differential equation with infinite delay and Poisson jumps in an abstract space. Journal of Mathematical Physics, 50(8), 082704.[Google Scholor]
  9. Balasubramaniam, P., & Ntouyas, S. K. (2006). Controllability for neutral stochastic functional differential inclusions with infinite delay in abstract space. Journal of Mathematical Analysis and Applications, 324(1), 161-176.[Google Scholor]
  10. Mokkedem, F. Z., & Fu, X. (2017). Approximate Controllability for a Semilinear Stochastic Evolution System with Infinite Delay in \(L_p\) Space. Applied Mathematics & Optimization, 75(2), 253-283.[Google Scholor]
  11. Shukla, A., Arora, U., & Sukavanam, N. (2015). Approximate controllability of retarded semilinear stochastic system with non local conditions. Journal of Applied Mathematics and Computing, 49(1-2), 513-527. [Google Scholor]
  12. Cui, J., Yan, L., & Sun, X. (2011). Exponential stability for neutral stochastic partial differential equations with delays and Poisson jumps. Statistics & Probability Letters, 81(12), 1970-1977.[Google Scholor]
  13. Annamalai, A., Kandasamy, B., Baleanu, D., & Arumugam, V. (2018). On neutral impulsive stochastic differential equations with Poisson jumps. Advances in Difference Equations, 2018(1), 290.[Google Scholor]
  14. Boufoussi, B., & Hajji, S. (2010). Successive approximation of neutral functional stochastic differential equations with jumps. Statistics & probability letters, 80(5-6), 324-332. [Google Scholor]
  15. Cui, J., & Yan, L. (2012). Successive approximation of neutral stochastic evolution equations with infinite delay and Poisson jumps. Applied Mathematics and Computation, 218(12), 6776-6784.[Google Scholor]
  16. Diop, M. A., & Zene, M. M. (2016). On the asymptotic stability of impulsive neutral stochastic partial integrodifferential equations with variable delays and Poisson jumps. Afrika Matematika, 27(1-2), 215-228.[Google Scholor]
  17. Kingman, J. F. C. (1993). Poisson processes. Oxford University Press. [Google Scholor]
  18. Luo, J., & Liu, K. (2008). Stability of infinite dimensional stochastic evolution equations with memory and Markovian jumps. Stochastic Processes and their Applications, 118(5), 864-895.[Google Scholor]