OMA – Vol 6 – Issue 2 (2022) – PISRT
OMA-Vol. 6 (2022), Issue 2, pp. 130 - 138
Ghulam Farid and Zeeshan Afzal

Open Journal of Mathematical Analysis

Further on quantum-plank derivatives and integrals in composite forms

Ghulam Farid\(^{1,*}\) and Zeeshan Afzal\(^2\)
\(^1\) Department of Mathematics, COMSATS University Islamabad, Attock Campus, Attock, Pakistan.
\(^2\) Department of Mathematics and Statistics, The University of Lahore, Lahore, Pakistan.
Correspondence should be addressed to Ghulam Farid at ghlmfarid@ciit-attock.edu.pk

Abstract

In quantum-plank calculus, \(q\)-derivatives and \(h\)-derivatives are fundamental notions. Recently, a composite form of both derivatives was introduced and called the \(q-h\)-derivative. This paper presents a further generalized notion of derivative, called the \((q,p-h)\)-derivative, which reproduces the \(q\)-derivative, \(h\)-derivative, \(q-h\)-derivative and \((p,q)\)-derivative as special cases. Theory based on all of the aforementioned derivatives can be generalized via this new notion. It is expected that this paper will be useful for researchers working in diverse fields of science and engineering.

Keywords:

\(q\)-derivative; \(q\)-integral; \(h\)-derivative; \(h\)-integral; \(q-h\)-derivative; \(q-h\)-integral; inequalities.

1. Introduction

The \(h\)-derivative, \(q\)-derivative, \(q-h\)-derivative and \((p,q)\)-derivative are given by the following difference quotients:
  • the \(h\)-derivative of \(g\): \(D_h g(t)=\frac{d_h g(t)}{d_h t}=\frac{g(t+h)-g(t)}{h}\),
  • the \(q\)-derivative of \(g\): \(D_q g(t)=\frac{d_q g(t)}{d_q t}=\frac{g(qt)-g(t)}{(q-1)t}\),
  • the \(q-h\)-derivative of \(g\): \(C_{h}D_q g(t)=\frac{_{h}d_q g(t)}{_{h}d_q t}=\frac{g(q(t+h))-g(t)}{(q-1)t+qh}\),
  • the \((p,q)\)-derivative of \(g\): \(D^p_q g(t)=\frac{d^p_q g(t)}{d^p_q t}=\frac{g(qt)-g(pt)}{(q-p)t}\),
respectively. The equations \(d_h g(t)=g(t+h)-g(t)\), \(d_q g(t)=g(qt)-g(t)\), \(_{h}d_q g(t)=g(q(t+h))-g(t)\) and \(d^p_q g(t)=g(qt)-g(pt)\) give the \(h\)-differential, \(q\)-differential, \(q-h\)-differential and \((p,q)\)-differential of the function \(g\), respectively. As an example, the \(h\)-derivative, \(q\)-derivative, \(q-h\)-derivative and \((p,q)\)-derivative of \(t^{n}\) can be computed in the forms \(\frac{(t+h)^n -t^n}{h}=n t^{n-1}+\frac{n(n-1)}{2}t^{n-2}h+\cdots+h^{n-1}\), \(\frac{q^n-1}{q-1}t^{n-1}=(q^{n-1}+\cdots+1)t^{n-1}\), \(\frac{q^n(t+h)^n-t^n}{(q-1)t+qh}\) and \(\frac{(q^n-p^n)t^n}{(q-p)t}=(q^{n-1}+\cdots+p^{n-1})t^{n-1}\), respectively. For the sake of simplicity, the notations \([n]_q\) and \([n]_{q,p}\) are used instead of \(\frac{q^n-1}{q-1}\) and \(\frac{q^n-p^n}{q-p}\); then \(D_q t^{n}=[n]_{q}t^{n-1}\) and \(D^p_q t^{n}=[n]_{q,p}t^{n-1}\). Since \(\lim\limits_{q\to1}D_q g(t)=\lim\limits_{h\to0}D_h g(t)=\frac{dg(t)}{dt}\), the \(h\)-derivative, \(q\)-derivative, \(q-h\)-derivative and \((p,q)\)-derivative are generalized notions of the ordinary derivative, provided that \(g\) is a differentiable function; therefore, these notions of derivatives are used to generalize the theory based on ordinary derivatives. In particular, the \(q\)-derivative leads to the subject of \(q\)-calculus; for a detailed study one can see [8]. The formulae for the \(q\)-derivative of the sum and product of two functions \(g_1\) and \(g_2\) are given by
\begin{equation}\label{f001} D_q\{g_1(t)+g_2(t)\}=D_q g_1(t)+D_qg_2(t)\,, \end{equation}
(1)
and
\begin{equation}\label{f1} D_q\{g_1(t)g_2(t)\}=g_1(qt)D_qg_2(t)+g_2(t)D_qg_1(t)\,, \end{equation}
(2)
respectively. Since \(g_1(t)g_2(t)=g_2(t)g_1(t)\), the above formula is equivalent to the following one
\begin{equation}\label{f2} D_q\{g_1(t)g_2(t)\}=g_1(t)D_qg_2(t)+g_2(qt)D_qg_1(t). \end{equation}
(3)
Using (2), the \(q\)-derivative of the quotient of two functions \(g_1\) and \(g_2\) is given by the formula
\begin{equation} D_{q}\left(\frac{g_{1}(t)}{g_{2}(t)}\right)=\frac{g_{2}(t)D_{q}g_{1}(t)-g_{1}(t)D_{q}g_{2}(t)}{g_{2}(t)g_{2}(qt)}. \end{equation}
(4)
Alternatively, by using (3), the \(q\)-derivative of the quotient of two functions \(g_1\) and \(g_2\) is given by the formula
\begin{equation} D_q\left(\frac{g_1(t)}{g_2(t)}\right)=\frac{g_2(qt)D_qg_1(t)-g_1(qt)D_qg_2(t)}{g_2(t)g_2(qt)}. \end{equation}
(5)
The formulae for the \(h\)-derivative of the sum and product of two functions \(g_1\) and \(g_2\) are given by
\begin{equation}\label{f4} D_h\{g_1(t)+g_2(t)\}=D_hg_1(t)+D_hg_2(t)\,, \end{equation}
(6)
and
\begin{equation}\label{f004} D_h\{g_1(t)g_2(t)\}=g_1(t)D_hg_2(t)+g_2(t+h)D_hg_1(t), \end{equation}
(7)
respectively. The \(h\)-derivative of the quotient of two functions \(g_1\) and \(g_2\) is given by the formula
\begin{equation}\label{f5} D_h\left(\frac{g_1(t)}{g_2(t)}\right)=\frac{g_2(t)D_hg_1(t)-g_1(t)D_hg_2(t)}{g_2(t)g_2(t+h)}. \end{equation}
(8)
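The product and quotient rules above are algebraic identities and can be checked numerically. The following Python sketch is an illustration only (not part of the paper); the sample functions and the parameter values are arbitrary choices:

```python
# Numerical sanity check of the q- and h-derivative product and quotient
# rules (2), (4), (7) and (8) at a single sample point.
import math

q, h, t = 0.7, 0.1, 2.0
g1 = lambda x: x**3 + 1.0
g2 = lambda x: math.exp(0.5 * x)

def Dq(g, t):
    # q-derivative: (g(qt) - g(t)) / ((q - 1) t)
    return (g(q * t) - g(t)) / ((q - 1) * t)

def Dh(g, t):
    # h-derivative: (g(t + h) - g(t)) / h
    return (g(t + h) - g(t)) / h

prod = lambda x: g1(x) * g2(x)
quot = lambda x: g1(x) / g2(x)

# Product rule (2): D_q(g1 g2) = g1(qt) D_q g2 + g2(t) D_q g1
assert math.isclose(Dq(prod, t), g1(q * t) * Dq(g2, t) + g2(t) * Dq(g1, t))
# Quotient rule (4): D_q(g1/g2) = (g2(t) D_q g1 - g1(t) D_q g2) / (g2(t) g2(qt))
assert math.isclose(
    Dq(quot, t),
    (g2(t) * Dq(g1, t) - g1(t) * Dq(g2, t)) / (g2(t) * g2(q * t)))
# Product rule (7): D_h(g1 g2) = g1(t) D_h g2 + g2(t+h) D_h g1
assert math.isclose(Dh(prod, t), g1(t) * Dh(g2, t) + g2(t + h) * Dh(g1, t))
# Quotient rule (8): D_h(g1/g2) = (g2(t) D_h g1 - g1(t) D_h g2) / (g2(t) g2(t+h))
assert math.isclose(
    Dh(quot, t),
    (g2(t) * Dh(g1, t) - g1(t) * Dh(g2, t)) / (g2(t) * g2(t + h)))
```

All four assertions hold exactly (up to rounding), since each rule is an identity of difference quotients.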
The above \(h\)-derivative and \(q\)-derivative formulas are unified in the following \(q-h\)-derivative formulas. The \(q-h\)-derivative is linear, i.e., the following equation holds: \begin{equation*} C_{h}D_q(\alpha f(t)+\beta g(t))=\alpha\, C_{h}D_qf(t)+\beta\, C_{h}D_qg(t). \end{equation*} The \(q-h\)-derivative of the product of two functions is given by the following equation:
\begin{align}\label{6} C_{h}D_q(f(t)g(t))=\frac{_{h}d_q(f(t)g(t))}{_{h}d_qt}=f(q(t+h))\, C_{h}D_qg(t)+g(t)\, C_{h}D_qf(t). \end{align}
(9)
The \(q-h\)-derivative of the quotient of two functions is given by the following equation:
\begin{equation} C_{h}D_q\bigg(\frac{f(t)}{g(t)}\bigg)=\frac{C_{h}D_q(f(t))g(q(t+h))-f(q(t+h))C_{h}D_q(g(t))}{g(q(t+h))g(t)}. \end{equation}
(10)
One can note that the \(q-h\)-derivative formulas generate both the \(q\)-derivative and the \(h\)-derivative formulas. Next, we give the definitions of the \(q\)-derivative, \((p,q)\)-derivative and \(q-h\)-derivative on a finite interval.

Definition 1.[2] Let \(0< q< 1\). For a continuous function \(f:I=[a,b]\to \mathbb{R}\) the \(q\)-derivative on \(I\) denoted by \(_{a}D_q f\) is defined by

\begin{align} _{a}D_q f(x):=\frac{f(qx+(1-q)a)-f(x)}{(q-1)(x-a)},\,\,x\neq a,\,\, _{a}D_q f(a)=\lim\limits_{x\to a} {_{a}}D_q f(x). \end{align}
(11)

Definition 2.[3, 4] Let \(0< q< p\leq1\). For a continuous function \(f:I=[a,b]\to \mathbb{R}\) the \((p,q)\)-derivative on \(I\) denoted by \(_{a}D_{p,q} f\) is defined by

\begin{align} &\nonumber _{a}D_{p,q} f(x):=\frac{f(qx+(1-q)a)-f(px+(1-p)a)}{(q-p)(x-a)},\,\,x\neq a,\tag{12}\\ &\nonumber _{a}D_{p,q} f(a)=\lim\limits_{x\to a} {_{a}}D_{p,q} f(x).\tag{13} \end{align}

Definition 3.[5] Let \(0< q< 1\), \(h\in\mathbb{R}\) and \(x\in I\). For a continuous function \(f:I\to \mathbb{R}\), the left and right \(q-h\)-derivatives on \(I\), denoted by \(C_{h}D^{a^+}_q f\) and \(C_{h}D^{b_-}_q f\), are defined, respectively, by the following equations:

\begin{align} &C_{h}D^{a^+}_q f(x):=\frac{f((1-q)a+q(x+h))-f(x)}{(1-q)(a-x)+qh};\,\,x\neq\frac{qh+(1-q)a}{1-q}:=u,\label{3.1}\\&C_{h}D^{b_-}_q f(x):=\frac{f((1-q)x+q(b+h))-f(b)}{(1-q)(x-b)+qh};\,\,x\neq\frac{-qh+(1-q)b}{1-q}:=v\label{3.01}, \end{align}
(14)
provided that \((1-q)a+q(x+h)\in[a,x]\) and \((1-q)x+q(b+h)\in[x,b]\). Also, \(C_{h}D^{a^+}_q f(u)=\lim\limits_{x\to u}C_{h}D^{a^+}_q f(x)\) and \(C_{h}D^{b_-}_q f(v)=\lim\limits_{x\to v}C_{h}D^{b_-}_q f(x)\).
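As a quick illustrative check (not part of the paper), the following Python sketch verifies that the left \(q-h\)-derivative of Definition 3 reduces at \(h=0\) to the \(q\)-derivative of Definition 1; the function \(f\), the endpoint \(a\) and the evaluation point \(x\) are arbitrary choices:

```python
# Check that the left q-h-derivative (Definition 3) at h = 0 equals the
# q-derivative on [a, b] (Definition 1).
import math

q, a = 0.6, 1.0
f = lambda t: t**2 + 3.0 * t

def left_qh_derivative(f, x, h):
    # Definition 3 (left): [f((1-q)a + q(x+h)) - f(x)] / ((1-q)(a-x) + qh)
    return (f((1 - q) * a + q * (x + h)) - f(x)) / ((1 - q) * (a - x) + q * h)

def aDq(f, x):
    # Definition 1: [f(qx + (1-q)a) - f(x)] / ((q-1)(x-a))
    return (f(q * x + (1 - q) * a) - f(x)) / ((q - 1) * (x - a))

x = 2.5
assert math.isclose(left_qh_derivative(f, x, h=0.0), aDq(f, x))
```

The two quotients agree because \((1-q)(a-x)=(q-1)(x-a)\) and the arguments of \(f\) coincide when \(h=0\).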

The definitions of the \(q\)-integral, \((p,q)\)-integral and \(q-h\)-integral of a function \(f\) on the interval \([a,b]\) are given as follows:

Definition 4.[2] Let \(0< q< 1\) and function \(f:I=[a,b]\to \mathbb{R}\). The \(q\)-definite integral on \(I\) is defined by the following formula:

\begin{align} \int_{a}^x f(t)_{a}d_q t=(1-q)(x-a)\sum_{n=0}^{\infty}{q^nf(q^nx+(1-q^n)a)},\,x\in[a,b]. \end{align}
(15)

Definition 5.[3] Let \(0< q< p\leq1\) and function \(f:I=[a,b]\to \mathbb{R}\). The \((p,q)\)-definite integral on \(I\) is defined by the following formula:

\begin{align}\label{i1} \int_{a}^x f(t)\, _{a}d^p_q t=(p-q)(x-a)\sum_{n=0}^{\infty}{\frac{q^n}{p^{n+1}}f\left(\frac{q^n}{p^{n+1}}x+\left(1-\frac{q^n}{p^{n+1}}\right)a\right)},\,x\in[a,b]. \end{align}
(16)

Definition 6.[5] Let \(0< q< 1\), \(h\in\mathbb{R}\) and \(f:I=[a,b]\to \mathbb{R}\) be a continuous function. Then the left and right \(q-h\)-integrals on \(I\), denoted by \(I^{a+}_{q-h} f\) and \(I^{b-}_{q-h} f\), are defined as follows:

\begin{align}\label{i001} &I^{a+}_{q-h} f(x):=\int_{a}^x f(t)\, _{h}d_q t=((1-q)(x-a)+qh)\sum_{n=0}^{\infty}{q^nf(q^na+(1-q^n)x+nq^n h)},\,x> a,\\& I^{b-}_{q-h} f(x):= \int_{x}^b f(t)\, _{h}d_q t=((1-q)(b-x)+qh)\sum_{n=0}^{\infty}{q^nf(q^nx+(1-q^n)b+nq^n h)},\,x< b. \end{align}
(17)
In (15), if \(a=0\), then the Jackson \(q\)-definite integral on \([0,x]\) is obtained as follows [8]:
\begin{align}\label{i2} \int_{0}^x f(t)\, _{0}d_q t=\int_{0}^x f(t)d_q t=(1-q)x\sum_{n=0}^{\infty}{q^nf(q^nx)},\,x\in[0,b]. \end{align}
(18)
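For instance, the Jackson integral (18) of \(f(t)=t\) is \((1-q)x\sum_{n\geq0}q^{2n}x=\frac{x^2}{1+q}\). The following Python sketch is illustrative only; the truncation level \(N\) and the parameter values are arbitrary choices:

```python
# Truncated-series evaluation of the Jackson q-integral (18) for f(t) = t,
# compared with the closed form x^2 / (1 + q) from the geometric series.
import math

def jackson_q_integral(f, x, q, N=200):
    # (18): (1-q) x * sum_{n>=0} q^n f(q^n x), truncated after N terms
    return (1 - q) * x * sum(q**n * f(q**n * x) for n in range(N))

q, x = 0.8, 2.0
approx = jackson_q_integral(lambda t: t, x, q)
assert math.isclose(approx, x**2 / (1 + q), rel_tol=1e-12)
```

The truncation error is of order \(q^{2N}\), which is negligible here.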
The aim of this paper is to define a generalized notion of derivative that includes the \(q\)-derivative (quantum derivative), \(h\)-derivative (plank derivative), \((p,q)\)-derivative and \(q-h\)-derivative (quantum-plank derivative). This new notion will be called the \((q,p-h)\)-derivative. We derive formulas for the \((q,p-h)\)-derivative of the sum/difference, product and quotient of two functions. We also give the definition of the \((q,p-h)\)-integral; moreover, the definitions of the \((q,p-h)\)-derivative as well as the \((q,p-h)\)-integral are given on a finite interval of the real line.

2. \((q,p-h)\)-Derivatives

We define the \((q,p-h)\)-differential of a real-valued function \(f\) as follows:
\begin{equation}\label{1} _{h}d^p_qf(x)=f(q(x+h))-f(px). \end{equation}
(19)
Then for \(``h=0"\), \(``p=1"\), \(``h=0,\,p=1"\) and \(``p=1,\,q\to1"\) in (19), we get the \((p,q)\)-differential, \(q-h\)-differential, \(q\)-differential and \(h\)-differential, respectively, as follows: \begin{align*} & _{0}d^p_qf(x)=f(qx)-f(px)=d^p_qf(x), \\ &_{h}d^1_qf(x)=f(q(x+h))-f(x)=\,_{h}d_qf(x), \\& _{0}d^1_qf(x)=f(qx)-f(x)=d_qf(x) \end{align*} and \begin{align*} &_{h}d^1_1f(x)=f(x+h)-f(x)=d_hf(x). \end{align*} In particular,
\begin{equation}\label{21} _{h}d^p_q(x)=qx+qh-px=(q-p)x+qh. \end{equation}
(20)
Then for \(``h=0"\), \(``p=1"\), \(``h=0,\,p=1"\) and \(``p=1,\,q\to1"\) in (20), we have
\begin{equation}\label{2} \begin{cases}_{0}d^p_q(x)=(q-p)x=\,d^p_q(x),\\ _{h}d^1_q(x)=(q-1)x+qh=\,_{h}d_q(x),\\ _{0}d^1_q(x)=(q-1)x=\,d_q(x),\\ _{h}d^1_1(x)=h=d_h(x),\end{cases} \end{equation}
(21)
respectively. For \(S(x)=f(x)+g(x)\) the \((q,p-h)\)-differential of \(S\) is given by;
\begin{align}\label{l1} _{h}d^p_q(S(x))=\,_{h}d^p_q(f(x)+g(x))=(f+g)(q(x+h))-(f+g)(px)=\,_{h}d^p_q f(x)+\,_{h}d^p_q g(x). \end{align}
(22)
For \(\beta\in\mathbb{R}\), the \((q,p-h)\)-differential of \(\beta f\) is given by;
\begin{align}\label{l2} _{h}d^p_q(\beta f)(x)=(\beta f)(q(x+h))-(\beta f)(px)=\beta\, \,_{h}d^p_q f(x). \end{align}
(23)
From (22) and (23), it can be concluded that \((q,p-h)\)-differential is linear. For the product function \(P\) of \(f\) and \(g\) i.e. \(P(x)=f(x)g(x)\), the \((q,p-h)\)-differential is calculated as follows: \begin{align*} _{h}d^p_q(P(x))=\,_{h}d^p_q((fg)(x))&=(fg)(q(x+h))-(fg)(px)\\&=f(q(x+h))g(q(x+h))+f(q(x+h))g(px)\nonumber\\&\quad-f(q(x+h))g(px)-f(px)g(px)\nonumber\\&=f(q(x+h))[g(q(x+h))-g(px)]\nonumber\\&\nonumber\quad+g(px)[f(q(x+h))-f(px)]. \end{align*} Hence we have the following formula for \((q,p-h)\)-differential of product of two functions:
\begin{equation}\label{3} _{h}d^p_q(P(x))=\,_{h}d^p_q(f(x)g(x))=f(q(x+h))_{h}d^p_qg(x)+g(px)_{h}d^p_qf(x). \end{equation}
(24)
For \(``h=0"\), \(``p=1"\), \(``h=0,\,p=1"\) and \(``p=1,\,q\to1"\) in (24), we get the \((p,q)\)-differential, \(q-h\)-differential, \(q\)-differential and \(h\)-differential of the product \(P\) of functions \(f\) and \(g\), respectively, as follows: \begin{align*} _{0}d^p_q(P(x))&=\,_{0}d^p_q(f(x)g(x))=d^p_q(f(x)g(x))\\ &=f(qx)_{0}d^p_qg(x)+g(px)\,_{0}d^p_qf(x)\\ &=f(qx)d^p_qg(x)+g(px)d^p_qf(x),\end{align*} \begin{align*} _{h}d^1_q(P(x))&=\,_{h}d^1_q(f(x)g(x))=\,_{h}d_q(f(x)g(x))\\ &=f(q(x+h))_{h}d^1_qg(x)+g(x)_{h}d^1_qf(x)\\ &=f(q(x+h))_{h}d_qg(x)+g(x)_{h}d_qf(x),\end{align*} \begin{align*} _{0}d^1_q(P(x))&=\,_{0}d^1_q(f(x)g(x))=d_q(f(x)g(x))\\ &=f(qx)_{0}d_qg(x)+g(x)\,_{0}d_qf(x)\\ &=f(qx)d_qg(x)+g(x)d_qf(x),\end{align*} \begin{align*} _{h}d^1_1(P(x))&=\,_{h}d^1_1(f(x)g(x))=d_h(f(x)g(x))\\ &=f(x+h)_{h}d^1_1g(x)+g(x)_{h}d^1_1f(x)\\ &=f(x+h)d_hg(x)+g(x)d_hf(x), \end{align*} respectively. Now, we define the composite derivative as follows:

Definition 7. Let \(0< q< p\leq1\), \(h\in\mathbb{R}\) and \(f:I\to \mathbb{R}\) be a continuous function. Then the \((q,p-h)\)-derivative of \(f\) is defined by

\begin{equation}\label{4} \begin{cases} C_{h}D^p_qf(x)=\frac{_{h}d^p_qf(x)}{_{h}d^p_qx}=\frac{f(q(x+h))-f(px)}{(q-p)x+qh},\,x\neq\frac{qh}{p-q}:=x_{\circ},\\C_{h}D^p_qf(x_{\circ})=\lim\limits_{x\to x_{\circ}}\,C_{h}D^p_qf(x). \end{cases}\end{equation}
(25)
For \(h=0\), and for \(p=1\) with \(q\to1\), in (25), we have
\begin{equation}\label{d4} C_{0}D^p_qf(x)=D^p_qf(x)=\frac{d^p_qf(x)}{d^p_qx}=\frac{f(qx)-f(px)}{(q-p)x}\,, \end{equation}
(26)
and
\begin{equation} C_{h}D_1f(x)=D_hf(x)=\frac{d_hf(x)}{d_hx}=\frac{f(x+h)-f(x)}{h}, \end{equation}
(27)
respectively. If \(f\) is differentiable, then for \(h=0,\,p=1\) and \(q\to1\) in (25), we get the ordinary derivative of \(f\).

Remark 1. It is notable that if we put \(p=1,\,h=\frac{\omega}{q}\), where \(\omega>0\), the Wolfgang Hahn difference operator given in [6] is obtained.

Example 1. The \((q,p-h)\)-derivative of \(x^n\), \(n\in\mathbb{N}\) is calculated as follows:

\begin{align}\label{5} C_{h}D^p_q(x^n)= \frac{q^n(x+h)^n-p^nx^n}{(q-p)x+qh}=\frac{(q^n-p^n)x^n}{(q-p)x+qh}+\frac{q^n(nx^{n-1}h+\cdots+h^n)}{(q-p)x+qh}. \end{align}
(28)
For \(``p=1"\), \(``h=0"\), \(``h=0,\,p=1"\) and \(``p=1,\,q\to1"\) in (28), we get quantum-plank derivative, \((p,q)\)-derivative, quantum-derivative and plank-derivative of function \(x^n\) respectively as follows:
\begin{align}\label{now5} C_{h}D^1_q(x^n)= \frac{q^n(x+h)^n-x^n}{(q-1)x+qh}=\frac{(q^n-1)x^n}{(q-1)x+qh}+\frac{q^n(nx^{n-1}h+...+h^n)}{(q-1)x+qh}, \end{align}
(29)
\begin{equation} C_{0}D^p_q(x^n)=\frac{q^nx^n-p^nx^n}{(q-p)x}=\frac{q^n-p^n}{q-p}x^{n-1}=[n]_{p,q}x^{n-1}=D^p_q(x^n), \end{equation}
(30)
\begin{equation} C_{0}D^1_q(x^n)=\frac{q^nx^n-x^n}{(q-1)x}=\frac{q^n-1}{q-1}x^{n-1}=[n]_qx^{n-1}=D_q(x^n), \end{equation}
(31)
and
\begin{equation} C_{h}D^1_1(x^n)=\frac{(x+h)^n-x^n}{h}=nx^{n-1}+\frac{n(n-1)}{2}x^{n-2}h+\cdots+h^{n-1}. \end{equation}
(32)
In particular, we have \(\lim\limits_{h\to0}C_{h}D^1_1(x^n)=nx^{n-1}\).
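The computations of Example 1 can be checked numerically. In the following Python sketch (illustrative only; the parameter values are arbitrary choices), the \((q,p-h)\)-derivative is evaluated directly from the difference quotient of Definition 7 and compared with the closed forms above:

```python
# Check of Example 1: the (q,p-h)-derivative of x^n from the difference
# quotient of Definition 7, its h = 0 specialization (30), and the
# classical limit p = 1, q -> 1, h -> 0.
import math

def qph_derivative(f, x, q, p, h):
    # Definition 7: [f(q(x+h)) - f(px)] / ((q-p)x + qh)
    return (f(q * (x + h)) - f(p * x)) / ((q - p) * x + q * h)

q, p, h, x, n = 0.5, 0.9, 0.2, 1.5, 3
f = lambda t: t**n

# (28): closed-form difference quotient for x^n
lhs = qph_derivative(f, x, q, p, h)
rhs = (q**n * (x + h)**n - p**n * x**n) / ((q - p) * x + q * h)
assert math.isclose(lhs, rhs)

# h = 0 recovers the (p,q)-derivative (30): [n]_{p,q} x^{n-1}
bracket_n = (q**n - p**n) / (q - p)
assert math.isclose(qph_derivative(f, x, q, p, 0.0), bracket_n * x**(n - 1))

# p = 1, q -> 1, h -> 0 approaches the ordinary derivative n x^(n-1)
assert abs(qph_derivative(f, x, 1 - 1e-7, 1.0, 1e-7) - n * x**2) < 1e-4
```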

2.1. Linearity of the \((q,p-h)\)-derivative

The \((q,p-h)\)-derivative is linear: for real-valued functions \(f, g\) and \(\alpha, \beta\in \mathbb{R}\), one obtains linearity from the linearity of \((q,p-h)\)-differentials as follows: \begin{equation*} C_{h}D^p_q(\alpha f(x)+\beta g(x))=\alpha\, C_{h}D^p_qf(x)+\beta\, C_{h}D^p_qg(x). \end{equation*}

2.2. Product formula for \((q,p-h)\)-derivatives

By using the \((q,p-h)\)-differential of the product of functions from (24), the product formula is stated as follows:
\begin{align} C_{h}D^p_q(f(x)g(x))& =\frac{_{h}d^p_q(f(x)g(x))}{_{h}d^p_qx}\notag\\ & =\frac{f(q(x+h))\,_{h}d^p_qg(x)+g(px)\,_{h}d^p_qf(x)}{_{h}d^p_qx}\notag\\ & = f(q(x+h))\, C_{h}D^p_qg(x)+g(px)\, C_{h}D^p_qf(x). \end{align}
(33)
It generates the \(q\)-derivative and \(h\)-derivative product formulas simultaneously. For \(h=0,\,p=1\), the \(q\)-derivative formula for the product of functions is obtained as follows: \begin{align} C_{0}D_q(f(x)g(x))&=\frac{d_q(f(x)g(x))}{d_qx}\notag\\ &=D_q(f(x)g(x))\notag\\&\nonumber=f(qx) C_{0}D_qg(x)+g(x) C_{0}D_qf(x)\\&\nonumber=f(qx)D_qg(x)+g(x) D_qf(x). \end{align} For \(p=1,\,q\to1\), the \(h\)-derivative formula for the product of functions is obtained as follows: \begin{align} C_{h}D_1(f(x)g(x))&=\frac{d_h(f(x)g(x))}{d_hx}\notag\\ &=D_h(f(x)g(x))\notag\\&\nonumber=f(x+h) C_{h}D_1g(x)+g(x) C_{h}D_1f(x)\\&\nonumber=f(x+h)D_hg(x)+g(x) D_hf(x). \end{align} By symmetry, we can obtain from (33):
\begin{align}\label{7} C_{h}D^p_q(g(x)f(x))=g(q(x+h))\, C_{h}D^p_qf(x)+f(px)\, C_{h}D^p_qg(x). \end{align}
(34)
Formulas (33) and (34) are equivalent.
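Dividing the differential identity (24) by \(_{h}d^p_qx\) gives the product formula \(C_{h}D^p_q(fg)(x)=f(q(x+h))\,C_{h}D^p_qg(x)+g(px)\,C_{h}D^p_qf(x)\), which the following Python sketch verifies at a sample point (illustrative only; the functions and parameters are arbitrary choices):

```python
# Numerical check of the (q,p-h) product formula obtained from the
# differential identity (24).
import math

q, p, h, x = 0.4, 0.8, 0.3, 1.2
f = lambda t: math.sin(t) + 2.0
g = lambda t: t**2 + 1.0

def D(u, x):
    # Definition 7: [u(q(x+h)) - u(px)] / ((q-p)x + qh)
    return (u(q * (x + h)) - u(p * x)) / ((q - p) * x + q * h)

lhs = D(lambda t: f(t) * g(t), x)
rhs = f(q * (x + h)) * D(g, x) + g(p * x) * D(f, x)
assert math.isclose(lhs, rhs)
```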

Remark 2. It is notable that if we put \(p=1,\,h=\frac{\omega}{q}\) for \(\omega>0\), equation (33) provides the product formula for \((q,\omega)\)-derivatives given in [6].

2.3. Quotient formula for \((q,p-h)\)-derivatives

The quotient formulas for the \((q,p-h)\)-derivative of the quotient of two functions are obtained by using (33) and (34) as follows. For \(g(x)\neq0\) we have
\begin{equation}\label{8} g(x)\frac{f(x)}{g(x)}=f(x). \end{equation}
(35)
By taking the \((q,p-h)\)-derivative on both sides, we have
\begin{equation}\label{9} C_{h}D^p_q\left(g(x)\frac{f(x)}{g(x)}\right)=C_{h}D^p_q(f(x)). \end{equation}
(36)
By using (33), one can get \begin{equation*} g(q(x+h))\,C_{h}D^p_q\bigg(\frac{f(x)}{g(x)}\bigg)+\frac{f(px)}{g(px)}\, C_{h}D^p_qg(x)=C_{h}D^p_q(f(x)). \end{equation*} Now
\begin{align}\label{2.18} C_{h}D^p_q\bigg(\frac{f(x)}{g(x)}\bigg)&=\frac{C_{h}D^p_q(f(x))-\frac{f(px)}{g(px)}C_{h}D^p_q(g(x))}{g(q(x+h))}\notag\\&=\frac{g(px)C_{h}D^p_q(f(x))-{f(px)}C_{h}D^p_q(g(x))}{g(q(x+h))g(px)}. \end{align}
(37)
By using (34), one can get \begin{equation*} \frac{f(q(x+h))}{g(q(x+h))}C_{h}D^p_q\bigg({g(x)}\bigg)+g(px)\,C_{h}D^p_q\bigg(\frac{f(x)}{g(x)}\bigg)=C_{h}D^p_q\bigg({f(x)}\bigg), \end{equation*} that is, \begin{equation*} C_{h}D^p_q\bigg(\frac{f(x)}{g(x)}\bigg)=\frac{C_{h}D^p_q(f(x))g(q(x+h))-f(q(x+h))C_{h}D^p_q(g(x))}{g(q(x+h))g(px)}. \end{equation*}
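The quotient formula can likewise be checked numerically against the defining difference quotient of Definition 7, with the arguments of \(f\) and \(g\) evaluated at \(px\) and \(q(x+h)\). The following Python sketch is illustrative only; the functions and parameter values are arbitrary choices:

```python
# Numerical check of both forms of the (q,p-h) quotient formula against
# the difference quotient of Definition 7.
import math

q, p, h, x = 0.3, 0.7, 0.25, 1.4
f = lambda t: t**3 + 2.0
g = lambda t: math.cosh(t)  # nonzero everywhere

def D(u, x):
    # Definition 7: [u(q(x+h)) - u(px)] / ((q-p)x + qh)
    return (u(q * (x + h)) - u(p * x)) / ((q - p) * x + q * h)

lhs = D(lambda t: f(t) / g(t), x)
# Form obtained from (33):
rhs = (g(p * x) * D(f, x) - f(p * x) * D(g, x)) / (g(q * (x + h)) * g(p * x))
assert math.isclose(lhs, rhs)
# Equivalent form obtained from (34):
rhs2 = (D(f, x) * g(q * (x + h)) - f(q * (x + h)) * D(g, x)) \
       / (g(q * (x + h)) * g(p * x))
assert math.isclose(lhs, rhs2)
```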

Remark 3. It is notable that if we put \(p=1,\,h=\frac{\omega}{q}\) for \(\omega>0\), equation (37) provides the quotient formula for \((q,\omega)\)-derivatives given in [6].

If \(f\) is the \((q,p-h)\)-derivative of \(F\), that is, \(f(x)=C_{h}D^p_q F(x)\), then \(F\) will be called the \((q,p-h)\)-anti-derivative of \(f\). The \((q,p-h)\)-anti-derivative is denoted by \(\int f(x)\, _{h}d^p_q x\).

3. \((q,p-h)\)-derivative on a finite interval

In this section, we consider a finite interval \(I:=[a, b]\), where \(a, b\) are real numbers. We define the \((q,p-h)\)-derivative on this interval in the following definition.

Definition 8. Let \(0< q< p\leq1\), \(h\in\mathbb{R}\) and \(x\in I\). For a continuous function \(f:I\to \mathbb{R}\), the left and right \((q,p-h)\)-derivatives on \(I\), denoted by \(C_{h}D^{a^+}_{p,q} f\) and \(C_{h}D^{b_-}_{p,q} f\), are defined, respectively, by the following equations:

\begin{align} & C_{h}D^{a^+}_{p,q} f(x):=\frac{f((1-q)a+q(x+h))-f((1-p)a+px)}{(p-q)(a-x)+qh};\,\,x\neq\frac{qh+(p-q)a}{p-q}:=u,\tag{38}\\ & C_{h}D^{b_-}_{p,q} f(x):=\frac{f((1-q)x+q(b+h))-f((1-p)x+pb)}{(p-q)(x-b)+qh};\,\,x\neq\frac{-qh+(p-q)b}{p-q}:=v,\tag{39} \end{align}
provided that \((1-q)a+q(x+h)\in[a,x]\) and \((1-q)x+q(b+h)\in[x,b]\). Also, \(C_{h}D^{a^+}_{p,q} f(u)=\lim\limits_{x\to u}C_{h}D^{a^+}_{p,q} f(x)\) and \(C_{h}D^{b_-}_{p,q} f(v)=\lim\limits_{x\to v}C_{h}D^{b_-}_{p,q} f(x)\).
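As an illustrative check (not part of the paper), the following Python sketch verifies that the left derivative of Definition 8 coincides at \(h=0\) with the \((p,q)\)-derivative on an interval from Definition 2; the function and parameter values are arbitrary choices:

```python
# Check that the left (q,p-h)-derivative on [a, b] (Definition 8) at h = 0
# equals the (p,q)-derivative on [a, b] (Definition 2).
import math

p, q, a = 0.9, 0.5, 1.0
f = lambda t: t**2 + t

def left_pqh(f, x, h):
    # (38): [f((1-q)a + q(x+h)) - f((1-p)a + px)] / ((p-q)(a-x) + qh)
    return (f((1 - q) * a + q * (x + h)) - f((1 - p) * a + p * x)) / \
           ((p - q) * (a - x) + q * h)

def aDpq(f, x):
    # Definition 2: [f(qx + (1-q)a) - f(px + (1-p)a)] / ((q-p)(x-a))
    return (f(q * x + (1 - q) * a) - f(p * x + (1 - p) * a)) / ((q - p) * (x - a))

x = 2.0
assert math.isclose(left_pqh(f, x, h=0.0), aDpq(f, x))
```

The agreement is exact, since \((p-q)(a-x)=(q-p)(x-a)\) and the arguments of \(f\) coincide at \(h=0\).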

The function \(f\) is called left \((q,p-h)\)-differentiable on \((a,x+h)\) if \(C_{h}D^{a^+}_{p,q} f(x)\) exists at each of its points; on the other hand, \(f\) is called right \((q,p-h)\)-differentiable on \((x+h,b)\) if \(C_{h}D^{b_-}_{p,q} f(x)\) exists at each of its points. It is noted that \(C_{h}D^{a^+}_{p,q} f(b)=C_{h}D^{b_-}_{p,q} f(a)\). In (38), the value \(h=0\) gives the \((p,q)\)-derivative on the interval \(I\) stated in Definition 2, i.e., \(C_{0}D^{a^+}_{p,q} f(x)= {_{a}}D_{p,q} f(x)\); the setting \(h=0,\,p=1\) gives the \(q\)-derivative on \(I\) stated in Definition 1, i.e., \(C_{0}D^{a^+}_{1,q} f(x)= {_{a}}D_q f(x)\); the value \(p=1\) gives the \(q-h\)-derivative on \(I\) stated in Definition 3, i.e., \(C_{h}D^{a^+}_{1,q} f(x)= C_{h}D^{a^+}_{q} f(x)\). Also, for \(a=0\) one has \(C_{h}D^{0^+}_{p,q} f(x)= C_{h}D^p_q f(x)\), i.e., the \((q,p-h)\)-derivative given in (25) is recovered; for \(h=a=0\) one has \(C_{0}D^{0^+}_{p,q} f(x)= D^p_q f(x)\), i.e., the \((p,q)\)-derivative is recovered; for \(a=0,\,q=p=1\) one has \(C_{h}D^{0^+}_{1,1} f(x)= D_h f(x)\), i.e., the \(h\)-derivative is recovered; for \(a=0,\,p=1\) one has \(C_{h}D^{0^+}_{1,q} f(x)= C_{h} D_q f(x)\), i.e., the \(q-h\)-derivative is recovered; for \(h=a=0,\,p=1\) and taking the limit \(q\to1\), one gets the usual derivative of a differentiable function \(f\), i.e., \(\lim\limits_{q\to 1} C_{0}D^{0^+}_{1,q} f(x)= \frac{d}{dx} f(x)\). Similar consequences can be obtained from equation (39). We give the definition of the left and right composite \((p,q)\)-derivatives on \(I\) as follows:

Definition 9. Let \(0< q< p\leq1\), \(h\in\mathbb{R}\) and \(x\in I\). For a continuous function \(f:I\to \mathbb{R}\), the left and right composite \((p,q)\)-derivatives on \(I\), denoted by \(D^{a^+}_{p,q} f\) and \(D^{b_-}_{p,q} f\), are defined, respectively, by the following equations:

\begin{align} &D^{a^+}_{p,q} f(x):=\frac{f(qx+(1-q)a)-f(px+(1-p)a)}{(p-q)(a-x)};\,\,x> a,\tag{40}\\&D^{b_-}_{p,q} f(x):=\frac{f(qb+(1-q)x)-f(pb+(1-p)x)}{(p-q)(x-b)};\,\,x< b\tag{41}. \end{align}
From (40) we have \(D^{0^+}_{p,q} f(x)=D_{p,q} f(x)\). Next, we give the definition of left and right \((q,p-h)\)-integrals as follows:

Definition 10. Let \(0< q< p\leq1\), \(h\in\mathbb{R}\) and \(f:I=[a,b]\to \mathbb{R}\) be a continuous function. Then the left and right \((q,p-h)\)-integrals on \(I\), denoted by \(I^{a+}_{q,p-h} f\) and \(I^{b-}_{q,p-h} f\), are defined as follows:

\begin{align} I^{a+}_{q,p-h} f(x):&=\int_{a}^x f(t)\, _{h}d^p_q t\nonumber\\&=((p-q)(x-a)+qh)\sum_{n=0}^{\infty}{\frac{q^n}{p^{n+1}}f\left(\frac{q^n}{p^{n+1}}a+\left(1-\frac{q^n}{p^{n+1}}\right)x+\frac{nq^nh}{p^{n+1}} \right)},\,x> a,\tag{42}\\ I^{b-}_{q,p-h} f(x):&= \int_{x}^b f(t)\, _{h}d^p_q t\notag\\&=((p-q)(b-x)+qh)\sum_{n=0}^{\infty}{\frac{q^n}{p^{n+1}}f\left(\frac{q^n}{p^{n+1}}x+\left(1-\frac{q^n}{p^{n+1}}\right)b+\frac{nq^nh}{p^{n+1}} \right)},\,x< b.\tag{43} \end{align}

Example 2. Let \(f(t)=t-a\) and \(g(t)=b-t\). Then we have

\begin{align}\label{exp1} I^{a+}_{q,p-h} f(x)=&\int_{a}^x (t-a) \, _{h}d^p_q t=\frac{(p-q)(x-a)+qh}{p-q}\notag\\&\times\left(\frac{(p+q-1)(x-a)}{p+q}+\frac{h(p-q)}{p^2}\sum_{n=0}^{\infty}n\left(\frac{q}{p}\right)^{2n}\right) \end{align}
(44)
and
\begin{align}\label{exp2} I^{b-}_{q,p-h} g(x)=&\int_{x}^b (b-t) \, _{h}d^p_q t=\frac{(p-q)(b-x)+qh}{p-q}\notag\\&\times\left(\frac{b-x}{p+q}-\frac{h(p-q)}{p^2}\sum_{n=0}^{\infty}n\left(\frac{q}{p}\right)^{2n}\right). \end{align}
(45)

Example 3. Let \(f(t)=x-t\) and \(g(t)=t-x\). Then we have

\begin{align}\label{exp01} I^{a+}_{q,p-h} f(x)&=\int_{a}^x (x-t) \, _{h}d^p_q t\notag\\ &=\frac{(p-q)(x-a)+qh}{p-q}\left(\frac{x-a}{p+q}-\frac{h(p-q)}{p^2}\sum_{n=0}^{\infty}n\left(\frac{q}{p}\right)^{2n}\right) \end{align}
(46)
and
\begin{align}\label{exp02} I^{b-}_{q,p-h} g(x)&=\int_{x}^b (t-x) \, _{h}d^p_q t\notag\\ &=\frac{(p-q)(b-x)+qh}{p-q}\left(\frac{(p+q-1)(b-x)}{p+q}+\frac{h(p-q)}{p^2}\sum_{n=0}^{\infty}n\left(\frac{q}{p}\right)^{2n}\right). \end{align}
(47)
By considering \(h=0\) the corresponding left and right \((p,q)\)-integrals are defined as follows:

Definition 11. Let \(0< q< p\leq1\) and \(f:I=[a,b]\to \mathbb{R}\) be a continuous function. Then the left and right \((p,q)\)-integrals on \(I\), denoted by \(I^{a+}_{p,q} f\) and \(I^{b-}_{p,q} f\), are defined as follows:

\begin{align}\label{i100} I^{a+}_{q,p-0} f(x):& =I^{a+}_{p,q} f(x)=\int_{a}^x f(t)d^p_q t=(p-q)(x-a)\sum_{n=0}^{\infty}{\frac{q^n}{p^{n+1}}f\left(\frac{q^n}{p^{n+1}}a+\left(1-\frac{q^n}{p^{n+1}}\right)x\right)},\,x> a,\tag{48}\\ I^{b-}_{q,p-0} f(x):&=I^{b-}_{p,q} f(x)= \int_{x}^b f(t)d^p_q t=(p-q)(b-x)\sum_{n=0}^{\infty}{\frac{q^n}{p^{n+1}}f\left(\frac{q^n}{p^{n+1}}x+\left(1-\frac{q^n}{p^{n+1}}\right)b\right)},\,x< b.\tag{49} \end{align}
The left \((p,q)\)-integral is equivalent to the \((p,q)\)-definite integral defined in [3]. For \(p=1\), the left \((p,q)\)-integral is equivalent to the \(q_a\)-definite integral defined in [2], while the right \((p,q)\)-integral reduces to the \(q^b\)-definite integral defined in [1].
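As a sanity check of Definition 11, for \(f\equiv1\) the geometric series gives \(I^{a+}_{p,q}f(x)=(p-q)(x-a)\sum_{n\geq0}\frac{q^n}{p^{n+1}}=x-a\). The following Python sketch (illustrative only; the truncation level \(N\) and the parameter values are arbitrary choices) confirms this numerically:

```python
# Truncated-series evaluation of the left (p,q)-integral (48); for the
# constant function f = 1 the geometric sum collapses to x - a.
import math

def left_pq_integral(f, a, x, p, q, N=400):
    # (48): (p-q)(x-a) * sum_{n>=0} (q^n/p^{n+1}) f(r_n a + (1 - r_n) x),
    # with r_n = q^n / p^{n+1}.
    total = 0.0
    for n in range(N):
        r = q**n / p**(n + 1)
        total += r * f(r * a + (1 - r) * x)
    return (p - q) * (x - a) * total

p, q, a, x = 0.9, 0.6, 1.0, 3.0
assert math.isclose(left_pq_integral(lambda t: 1.0, a, x, p, q), x - a,
                    rel_tol=1e-10)
```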

Example 4. Let \(f(t)=t-a\) and \(g(t)=b-t\). Then from Example 2 for \(h=0\) we have \(I^{a+}_{q,p-0} f(x)=I^{a+}_{p,q} f(x)=\int_{a}^x (t-a)d^p_q t=\frac{(p+q-1)(x-a)^2}{p+q}\) and \(I^{b-}_{q,p-0} g(x)=I^{b-}_{p,q} g(x)= \int_{x}^b (b-t)d^p_q t=\frac{(b-x)^2}{p+q}\).

By considering \(p=1,\,q\to1\) the corresponding left and right \(h\)-integrals are defined as follows:

Definition 12. Let \(f:I=[a,b]\to \mathbb{R}\) be a continuous function. Then the left and right \(h\)-integrals on \(I\), denoted by \(I^{a+}_{h} f\) and \(I^{b-}_{h} f\), are defined as follows:

\begin{align}\label{i01} &I^{a+}_{h} f(x)=\lim\limits_{q\to 1} I^{a+}_{q,1-h} f(x),\,x> a,\tag{50}\\& I^{b-}_{h} f(x)= \lim\limits_{q\to 1} I^{b-}_{q,1-h} f(x),\,x< b.\tag{51} \end{align}
It is noted from Definition 10 that \(I^{a+}_{q,p-h} f(b)=I^{b-}_{q,p-h} f(a)=\int_{a}^{b} f(t)\,_{h}d^p_{q} t\).

Conflicts of Interest:

The authors declare no conflict of interest.

Data Availability:

All data required for this research is included within this paper.

Funding Information:

This research is funded by Higher Education Commission of Pakistan.

References

  1. Bermudo, S., Kórus, P., & Nápoles Valdés, J. E. (2020). On \(q\)-Hermite-Hadamard inequalities for general convex functions. Acta Mathematica Hungarica, 162, 364-374.
  2. Tariboon, J., & Ntouyas, S. K. (2014). Quantum integral inequalities on finite intervals. Journal of Inequalities and Applications, 2014, Article No. 121.
  3. Tunç, M., & Göv, E. (2016). \((p,q)\)-integral inequalities. RGMIA, 19, 1-13.
  4. Tunç, M., & Göv, E. (2021). Some integral inequalities via \((p,q)\)-calculus on finite intervals. Filomat, 35(5), 1421-1430.
  5. Farid, G., Anwar, M., & Shoaib, M. (2023). On generalizations of \(q\)- and \(h\)-integrals and some related inequalities. Preprint.
  6. Hahn, W. (1983). Ein Beitrag zur Theorie der Orthogonalpolynome. Monatshefte für Mathematik, 95(1), 19-24.
  7. Farid, G. (2019). Some Riemann-Liouville fractional integral inequalities for convex functions. The Journal of Analysis, 27(4), 1095-1102.
  8. Kac, V., & Cheung, P. (2002). Quantum Calculus. Springer, New York, NY.
Explicit formula for the \(n\)-th derivative of a quotient
OMA-Vol. 6 (2022), Issue 2, pp. 120 - 129
Roudy El Haddad

Open Journal of Mathematical Analysis

Explicit formula for the \(n\)-th derivative of a quotient

Roudy El Haddad
Université La Sagesse, Faculté de génie, Polytech; roudy1581999@live.com

Abstract

Leibniz’s rule for the \(n\)-th derivative of a product is a very well-known and extremely useful formula. This article introduces an analogous explicit formula for the \(n\)-th derivative of a quotient of two functions. Later, we use this formula to derive new partition identities and to develop expressions for some particular \(n\)-th derivatives.

Keywords:

\(n\)-th derivative of a quotient; Generalized quotient rule; Partitions.

1. Introduction

If we chose two functions \(u\) and \(v\) and went around asking mathematicians to compute the \(n\)-th derivative of their product, the first idea that would come to their mind is to use Leibniz's formula. However, what if we asked them to compute the \(n\)-th derivative of the quotient instead? What formula would come to their mind? For many, the answer is none: a large portion of the mathematical community is simply unaware that such a formula exists.

This is because, while Leibniz's formula is studied in practically all calculus courses, an analogous formula for the quotient of two functions is rarely discussed. Although many wonder whether such a formula exists, little work has been done on the subject. The first step was taken in 1967 [1], when a simpler question was answered: a recursive formula for the \(n\)-th derivative of \(1/f(x)\) was presented. Later, in 1980, F. Gerrish [2] noticed an interesting pattern linking the \(n\)-th derivative of a quotient to a notable determinant. In 2008, this connection was used to establish a recursive formula for such a derivative. So if such a formula exists, why do most of us not know about it? There are two primary reasons. The first is that the existing formulas are not very practical, as they are recursive rather than explicit. The second is that such a formula was thought to be useless.

F. Gerrish [2] even went as far as calling it the ``Useless Formula''. However, since then, this formula has found various applications and has been used to deal with a multitude of topics [3, 4, 5, 6, 7, 8, 9]. Hence, in this article, we propose revisiting the subject and developing an explicit formula for the \(n\)-th derivative of a quotient analogous to the generalized product rule. We hope this formula will become a standard like Leibniz's rule.

More precisely, it will be analogous to the generalized product rule for the product of several functions; by analogous we mean that the formula will be explicit and have the same form (that is, it will be expressed as a sum over partitions). In the same way that Leibniz's formula is often referred to as the product rule, in this article, for simplicity, we will refer to the formula for the \(n\)-th derivative of a quotient as the quotient rule.

We will begin by deriving a new formula for the \(n\)-th derivative of \(1/f(x)\) (§2). Although such a formula already exists, the formula presented in [1] is rather complicated. Therefore, we propose a simpler, explicit formula involving partitions. We will refer to this particular case as the reciprocal rule.

Similarly, although a recursive formula already exists for the quotient rule, no explicit formula exists. Therefore, in §3, by combining the reciprocal formula with Leibniz's formula, we develop an explicit formula for the \(n\)-th derivative of the quotient of two functions. Finally, in §4, we apply the reciprocal and quotient rules to derive new partition identities and expressions for some special \(n\)-th derivatives.

2. \(n\)-th derivative of \(1/v(x)\) (Reciprocal rule)

We begin by introducing the concept of partitions, as partitions are an essential part of the quotient rule we will develop. Following the author in [10, 11], a partition can be defined as follows:

Definition 1. A partition of a non-negative integer \(m\) is a multiset of positive integers whose sum equals \(m\). We can represent a partition of \(m\) by its tuple of multiplicities \((y_{k,1},\ldots,y_{k,m})\), which verifies

\begin{equation} y_{k,1}+2y_{k,2}+ \cdots + my_{k,m} =\sum_{i=1}^{m}{i\,y_{k,i}} =m. \end{equation}
(1)

The coefficient \(y_{k,i}\) is the multiplicity of the integer \(i\) in the \(k\)-th partition of \(m\). Note that \(0\leq y_{k,i}\leq m\) while \(1\leq i \leq m\). Also note that the number of partitions of an integer \(m\) is given by the partition function denoted \(p(m)\) and hence, \(1 \leq k \leq p(m)\). In the remainder of this text, the subscript \(k\) will be added to indicate that a given parameter is associated with a given partition. Similarly, for simplicity, we will omit the bounds of \(i\) and write \(\sum{iy_{k,i}}=m\) and \(\sum{y_{k,i}}\). Furthermore, we define the following partition notation:

\begin{equation} \label{pi_k} \pi_k=\sum{iy_{k,i}}, \end{equation}
(2)
\begin{equation} \label{r_k} r_k=\sum{y_{k,i}}. \end{equation}
(3)
As partitions are not the main focus of this article, we will not go into more details. For readers interested in a more in-depth explanation about partitions, see [12].
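As a concrete illustration (ours, not part of the original article), the multiplicity representation of Definition 1 can be enumerated directly. The short Python sketch below lists the partitions of \(m\) as tuples \((y_1,\ldots,y_m)\) satisfying Eq. (1), from which \(\pi_k\) and \(r_k\) are read off; the counts agree with the partition function, e.g. \(p(4)=5\):

```python
from itertools import product

def multiplicity_partitions(m):
    """All tuples (y_1, ..., y_m) with y_1 + 2*y_2 + ... + m*y_m = m, as in Eq. (1)."""
    if m == 0:
        return [()]
    ranges = [range(m // i + 1) for i in range(1, m + 1)]  # y_i can be at most m // i
    return [y for y in product(*ranges)
            if sum(i * yi for i, yi in enumerate(y, start=1)) == m]

# The p(4) = 5 partitions of 4; e.g. (2, 1, 0, 0) encodes 1 + 1 + 2 = 4.
partitions_of_4 = multiplicity_partitions(4)
# For every partition, pi_k = sum(i * y_i) = 4, and r_k = sum(y_i) counts its parts.
r_values = [sum(y) for y in partitions_of_4]
```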

Before we begin proving the main results of this section, let us introduce the following notation: In the remainder of this article, the letters \(u\) and \(v\) will be used to indicate a function of \(x\). In other words, \(u\) represents \(u(x)\) and \(v\) represents \(v(x)\).

Definition 2. Let us define the following shorthand notation:

\begin{equation} \left(v\right)^{(n)}=v^{(n)}=\frac{d^n}{dx^n}\left(v(x)\right). \end{equation}
(4)
In order to prove the reciprocal rule, we need to first prove the following lemma.

Lemma 1. We have that \begin{equation*} \begin{split} \sum_{j=0}^{n-1}{\binom{\sum{Y_{k,i}}-1}{Y_{k,1}, \ldots, Y_{k,{n-j}}-1, \ldots, Y_{k,{n}}}} =\sum_{j=1}^{n}{\binom{\sum{Y_{k,i}}-1}{Y_{k,1}, \ldots, Y_{k,{j}}-1, \ldots, Y_{k,{n}}}} =\binom{\sum{Y_{k,i}}}{Y_{k,1}, \ldots, Y_{k,{n}}} .\end{split} \end{equation*}

Proof. \begin{equation*} \begin{split} \sum_{j=0}^{n-1}{\binom{\sum{Y_{k,i}}-1}{Y_{k,1}, \ldots, Y_{k,{n-j}}-1, \ldots, Y_{k,{n}}}} &=\sum_{j=0}^{n-1}{\frac{(\sum{Y_{k,i}}-1)!}{Y_{k,1}! \cdots Y_{k,{n-j}}! \cdots Y_{k,{n}}!}(Y_{k,{n-j}})} \\ &=\frac{(\sum{Y_{k,i}}-1)!}{Y_{k,1}! \cdots Y_{k,{n}}!}\sum_{j=1}^{n}{(Y_{k,{j}})} \\ &=\frac{(\sum{Y_{k,i}})!}{Y_{k,1}! \cdots Y_{k,{n}}!}=\binom{\sum{Y_{k,i}}}{Y_{k,1}, \ldots, Y_{k,{n}}} .\end{split} \end{equation*}
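Lemma 1 lends itself to a quick numerical sanity check. In the illustrative Python sketch below (ours), terms in which \(Y_{k,j}=0\) are treated as zero, as in the proof, and both sides are compared for arbitrary multiplicity tuples:

```python
from math import factorial

def multinomial(ks):
    """Multinomial coefficient (k_1 + ... + k_n)! / (k_1! ... k_n!)."""
    out = factorial(sum(ks))
    for k in ks:
        out //= factorial(k)
    return out

def lemma1_lhs(Y):
    """Sum over j with Y_j >= 1 of multinomial(sum(Y) - 1; ..., Y_j - 1, ...)."""
    total = 0
    for j, yj in enumerate(Y):
        if yj >= 1:
            reduced = list(Y)
            reduced[j] -= 1
            total += multinomial(reduced)
    return total

# Both sides agree, e.g. for Y = (2, 0, 1, 1): 6 + 3 + 3 = 12 = 4!/(2! 1! 1!).
```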

Using the recursive formula for the quotient rule [13], we derive the reciprocal rule.

Theorem 1.[Reciprocal rule] Let \(v\) be an \(n\) times differentiable function of \(x\), for any \(n\in\mathbb{N}\) and at every point where \(v\neq0\), we have that \begin{equation*} \left(\frac{1}{v}\right)^{(n)} =\frac{d^n}{dx^n}\left(\frac{1}{v}\right) =n!\sum_{\sum{iy_{k,i}}=n}{\binom{\sum{y_{k,i}}}{y_{k,1}, \ldots, y_{k,n}}\frac{(-1)^{\sum{y_{k,i}}}}{v^{\sum{y_{k,i}}+1}}\prod_{i=1}^{n}{\left[\frac{v^{(i)}}{i!}\right]^{y_{k,i}}}}. \end{equation*}

Remark 1. A very interesting and compact way of rewriting this theorem is as follows: \begin{equation*} \left(\frac{1}{v}\right)^{(n)} =n!\sum_{\sum{iy_{k,i}}=n}{C_{k}\prod_{i=1}^{n}{\frac{1}{y_{k,i}!}\left[\frac{v^{(i)}}{i!}\right]^{y_{k,i}}}} \end{equation*} where

\begin{equation} \label{C_k} C_k =\frac{d^{\sum{y_{k,i}}}}{dv^{\sum{y_{k,i}}}}\left(\frac{1}{v}\right) =\frac{(\sum{y_{k,i}})!(-1)^{\sum{y_{k,i}}}}{v^{\sum{y_{k,i}}+1}} =\frac{(-1)^{r_k}r_k!}{v^{r_k+1}} .\end{equation}
(5)
As we can see, the general reciprocal rule using the \(C_k\) notation is simple and easy to memorize.

Remark 2. Let us define the notation \(\{a\}_b\), which corresponds to writing the value \(a\) repeated \(b\) times. Let \(I_k=(\{1\}_{y_{k,1}}, \ldots, \{n\}_{y_{k,n}})\). Similarly, let \(P_k=(y_{k,1}, \ldots, y_{k,n})\). Other interesting ways of writing the theorem are: \begin{equation*} \left(\frac{1}{v}\right)^{(n)} =\sum_{\sum{iy_{k,i}}=n}{\binom{n}{I_k} C_{k}\prod_{i=1}^{n}{\frac{\left[v^{(i)}\right]^{y_{k,i}}}{y_{k,i}!}}} =\frac{1}{v}\sum_{\sum{iy_{k,i}}=n}{\binom{n}{I_k}\binom{\sum{y_{k,i}}}{P_k} \prod_{i=1}^{n}{\left[-\frac{v^{(i)}}{v}\right]^{y_{k,i}}}} .\end{equation*}

Proof. 1. Base case: verify true for \(n=1\). \begin{equation*} 1!\sum_{\sum{iy_{k,i}}=1}{\binom{\sum{y_{k,i}}}{y_{k,1}, \ldots, y_{k,n}}\frac{(-1)^{\sum{y_{k,i}}}}{v^{\sum{y_{k,i}}+1}}\prod_{i=1}^{n}{\left[\frac{v^{(i)}}{i!}\right]^{y_{k,i}}}} =\binom{1}{1}\frac{(-1)^1}{v^2}\left[\frac{v^{'}}{1!}\right]^{1} =-\frac{v'}{v^2} =\frac{d}{dx}\left(\frac{1}{v}\right) .\end{equation*}

Remark 3. We can also verify true for \(n=0\). It is important to note that the partition assumed to correspond to zero is \((0, 0, \ldots)\). Hence, \begin{equation*} 0!\sum_{\sum{iy_{k,i}}=0}{\binom{\sum{y_{k,i}}}{y_{k,1}, \ldots, y_{k,n}}\frac{(-1)^{\sum{y_{k,i}}}}{v^{\sum{y_{k,i}}+1}}\prod_{i=1}^{n}{\left[\frac{v^{(i)}}{i!}\right]^{y_{k,i}}}} =\binom{0}{0,0, \ldots}\frac{(-1)^0}{v^1}\left(1\right) =\frac{1}{v} =\left(\frac{1}{v}\right)^{(0)} .\end{equation*}

2. Induction hypothesis: assume the statement is true up to \((n-1)\in\mathbb{N}\). \begin{equation*} \left(\frac{1}{v}\right)^{(n-1)} =(n-1)!\sum_{\sum{iy_{k,i}}=n-1}{\binom{\sum{y_{k,i}}}{y_{k,1}, \ldots, y_{k,{n-1}}}\frac{(-1)^{\sum{y_{k,i}}}}{v^{\sum{y_{k,i}}+1}}\prod_{i=1}^{n-1}{\left[\frac{v^{(i)}}{i!}\right]^{y_{k,i}}}}. \end{equation*} 3. Induction step: we will show that this statement is true for \(n\). We have to show the following statement to be true: \begin{equation*} \left(\frac{1}{v}\right)^{(n)} =n!\sum_{\sum{iy_{k,i}}=n}{\binom{\sum{y_{k,i}}}{y_{k,1}, \ldots, y_{k,{n}}}\frac{(-1)^{\sum{y_{k,i}}}}{v^{\sum{y_{k,i}}+1}}\prod_{i=1}^{n}{\left[\frac{v^{(i)}}{i!}\right]^{y_{k,i}}}}. \end{equation*} Using the recursive formula developed in [13] with \(u=1\), we have \begin{equation*} \left(\frac{1}{v}\right)^{(n)} =\frac{(-1)n!}{v}\sum_{j=1}^{n}{\frac{v^{(n+1-j)}}{(n+1-j)!}\frac{\left(\frac{1}{v}\right)^{(j-1)}}{(j-1)!}} =\frac{(-1)n!}{v}\sum_{j=0}^{n-1}{\frac{v^{(n-j)}}{(n-j)!}\frac{\left(\frac{1}{v}\right)^{(j)}}{j!}}. \end{equation*} Applying the induction hypothesis, we get \begin{equation*} \begin{split} \left(\frac{1}{v}\right)^{(n)} &=\frac{(-1)n!}{v}\sum_{j=0}^{n-1}{\frac{v^{(n-j)}}{(n-j)!} \sum_{\sum{iy_{k,i}}=j}{\binom{\sum{y_{k,i}}}{y_{k,1}, \ldots, y_{k,{j}}}\frac{(-1)^{\sum{y_{k,i}}}}{v^{\sum{y_{k,i}}+1}}\prod_{i=1}^{j}{\left[\frac{v^{(i)}}{i!}\right]^{y_{k,i}}}}}\\ &=n!\sum_{j=0}^{n-1}{\frac{v^{(n-j)}}{(n-j)!} \sum_{\sum{iy_{k,i}}=j}{\binom{\sum{y_{k,i}}}{y_{k,1}, \ldots, y_{k,{j}}}\frac{(-1)^{\sum{y_{k,i}}+1}}{v^{\sum{y_{k,i}}+2}}\prod_{i=1}^{j}{\left[\frac{v^{(i)}}{i!}\right]^{y_{k,i}}}}} .\end{split} \end{equation*} Let us define an extension \((y_{k,1},\ldots,y_{k,n})\) of \((y_{k,1},\ldots,y_{k,j})\) where \(y_{k,j+1}=\cdots=y_{k,n}=0\).
Hence, we can write that \begin{equation*} \begin{split} \left(\frac{1}{v}\right)^{(n)} &=n!\sum_{j=0}^{n-1}{\frac{v^{(n-j)}}{(n-j)!} \sum_{\sum{iy_{k,i}}=j}{\binom{\sum{y_{k,i}}}{y_{k,1}, \ldots, y_{k,{n}}}\frac{(-1)^{\sum{y_{k,i}}+1}}{v^{\sum{y_{k,i}}+2}}\prod_{i=1}^{n}{\left[\frac{v^{(i)}}{i!}\right]^{y_{k,i}}}}}\\ &=n!\sum_{j=0}^{n-1}{\sum_{\sum{iy_{k,i}}+(n-j)\cdot1=n}{\binom{\sum{y_{k,i}}}{y_{k,1}, \ldots, y_{k,{n}}}\frac{(-1)^{\sum{y_{k,i}}+1}}{v^{\sum{y_{k,i}}+2}}{\left[\frac{v^{(n-j)}}{(n-j)!}\right]}\prod_{i=1}^{n}{\left[\frac{v^{(i)}}{i!}\right]^{y_{k,i}}}}} .\end{split} \end{equation*} Notice that \(1 \leq n-j \leq n\) as \(0 \leq j \leq n-1\). Now, for all \((n-j)\in[1,n]\), let us associate with each partition \((y_{k,1}, \ldots, y_{k,n})\), the partition \((Y_{k,1}, \ldots, Y_{k,n})\) such that \begin{equation*} \begin{cases} Y_{k,i}=y_{k,i}+1, & \text{for } i=n-j, \\ Y_{k,i}=y_{k,i}, & \text{otherwise}. \end{cases} \end{equation*} Notice that \(\sum{Y_{k,i}}=\sum{y_{k,i}}+1\) and that \(\sum{iY_{k,i}}=n\). Hence, we can write \begin{equation*} \begin{split} \left(\frac{1}{v}\right)^{(n)} &=n!\sum_{j=0}^{n-1}{\sum_{\sum{iY_{k,i}}=n}{\binom{\sum{Y_{k,i}}-1}{Y_{k,1}, \ldots, Y_{k,{n-j}}-1, \ldots, Y_{k,{n}}}\frac{(-1)^{\sum{Y_{k,i}}}}{v^{\sum{Y_{k,i}}+1}}\prod_{i=1}^{n}{\left[\frac{v^{(i)}}{i!}\right]^{Y_{k,i}}}}} \\ &=n!\sum_{\sum{iY_{k,i}}=n}{\frac{(-1)^{\sum{Y_{k,i}}}}{v^{\sum{Y_{k,i}}+1}}\left(\prod_{i=1}^{n}{\left[\frac{v^{(i)}}{i!}\right]^{Y_{k,i}}}\right)}\sum_{j=0}^{n-1}{\binom{\sum{Y_{k,i}}-1}{Y_{k,1}, \ldots, Y_{k,{n-j}}-1, \ldots, Y_{k,{n}}}} .\end{split} \end{equation*} Applying Lemma 1 to the inner sum, we obtain \begin{equation*} \left(\frac{1}{v}\right)^{(n)} =n!\sum_{\sum{iY_{k,i}}=n}{\binom{\sum{Y_{k,i}}}{Y_{k,1}, \ldots, Y_{k,{n}}}\frac{(-1)^{\sum{Y_{k,i}}}}{v^{\sum{Y_{k,i}}+1}}\prod_{i=1}^{n}{\left[\frac{v^{(i)}}{i!}\right]^{Y_{k,i}}}} .\end{equation*} This concludes our proof by induction.
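Theorem 1 can also be cross-checked symbolically. The following Python/SymPy sketch (ours; the function name and the sample \(v\) are arbitrary choices) evaluates the partition sum and compares it with direct repeated differentiation:

```python
import sympy as sp
from itertools import product
from math import factorial

x = sp.symbols('x')

def reciprocal_rule(v, n):
    """n-th derivative of 1/v(x), computed from the partition sum of Theorem 1."""
    if n == 0:
        return 1 / v
    ranges = [range(n // i + 1) for i in range(1, n + 1)]
    total = sp.Integer(0)
    for y in product(*ranges):                      # enumerate partitions of n
        if sum(i * yi for i, yi in enumerate(y, 1)) != n:
            continue
        r = sum(y)                                  # r_k, the number of parts
        multinom = factorial(r)
        for yi in y:
            multinom //= factorial(yi)              # multinomial(r_k; y_1, ..., y_n)
        term = sp.Integer(multinom * (-1)**r) / v**(r + 1)
        for i, yi in enumerate(y, 1):
            term *= (sp.diff(v, x, i) / factorial(i))**yi
        total += term
    return factorial(n) * total

v = sp.exp(x) + x**2 + 1
difference = reciprocal_rule(v, 4) - sp.diff(1 / v, x, 4)  # vanishes identically
```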

Remark 4. As we can see, the reciprocal rule derived (Theorem 1) is very similar to the product rule for the product of several functions:

\begin{equation} \left(u_1 \cdots u_m \right)^{(n)} =\sum_{\ell_1+\cdots+\ell_m=n}{\binom{n}{\ell_1,\ldots,\ell_m}\prod_{i=1}^{m}{u_i^{(\ell_i)}}} \end{equation}
(6)
There exist other ways of proving Theorem 1. In what follows, we present a few propositions that are useful for doing so. First, let us prove the following useful proposition for the derivative of a product.

Proposition 1. Let \(u_1\), \(\ldots\), \(u_n\) be differentiable functions of \(x\); we have that \begin{equation*} \frac{d}{dx}\left(\prod_{i=1}^{n}{u_i}\right) =\left(\prod_{i=1}^{n}{u_i}\right)\sum_{i=1}^{n}{\frac{u_{i}^{'}}{u_i}} .\end{equation*}

Proof. Let \(f(x)=u_1 \cdots u_n\). Taking the logarithm of both sides (for nonvanishing \(u_i\), one may work with \(\ln|f(x)|\)), we get \begin{equation*} \ln f(x) =\ln\left(\prod_{i=1}^{n}{u_i}\right) =\sum_{i=1}^{n}{\ln u_i}. \end{equation*} Differentiating both sides, we get \begin{equation*} \frac{f'(x)}{f(x)} =\sum_{i=1}^{n}{\frac{u_{i}^{'}}{u_i}}. \end{equation*} Multiplying both sides by \(f(x)\), we obtain the desired formula.
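Proposition 1 is easy to verify symbolically; in the illustrative sketch below (ours), both sides are compared for three sample functions that do not vanish near the evaluation points:

```python
import sympy as sp

x = sp.symbols('x')
u = [sp.sin(x) + 2, sp.exp(x), x**2 + 1]   # sample nonvanishing functions

prod_u = u[0] * u[1] * u[2]
lhs = sp.diff(prod_u, x)
rhs = prod_u * sum(sp.diff(ui, x) / ui for ui in u)
# lhs - rhs simplifies to zero
```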

Now we prove the following partition identity involving a special sum of multinomial coefficients. This identity is equivalent to Lemma 1, which we used to prove Theorem 1.

Proposition 2. We have that \begin{equation*} \sum_{\substack{\sum{\varphi_i}=\sum{Y_i}-1 \\ \varphi_i \leq Y_i}}{\binom{\sum{Y_i}-1}{\varphi_1, \ldots, \varphi_n}} =\binom{\sum{Y_i}}{Y_1, \ldots, Y_n} .\end{equation*}

Proof. Let \(Z_i=Y_i-\varphi_i\) for \(1 \leq i \leq n\). \begin{equation*} \begin{split} \sum_{\substack{\sum{\varphi_i}=\sum{Y_i}-1 \\ \varphi_i \leq Y_i}}{\binom{\sum{Y_i}-1}{\varphi_1, \ldots, \varphi_n}} &=\frac{(\sum{Y_i}-1)!}{Y_1! \cdots Y_n!}\sum_{\substack{\sum{\varphi_i}=\sum{Y_i}-1 \\ \varphi_i \leq Y_i}}{\frac{Y_1! \cdots Y_n!}{\varphi_1! \cdots \varphi_n!}}\\ &=\frac{(\sum{Y_i}-1)!}{Y_1! \cdots Y_n!}\sum_{\substack{\sum{Z_i}=1 \\ Z_i \geq 0}}{\frac{Y_1! \cdots Y_n!}{(Y_1-Z_1)! \cdots (Y_n-Z_n)!}} .\end{split} \end{equation*} Knowing that the \(Z_i\)'s are non-negative integers, the only way for their sum to be equal to 1 is if one of them is equal to 1 and the others are equal to 0. Hence, we have that \begin{equation*} \sum_{\substack{\sum{Z_i}=1 \\ Z_i \geq 0}}{\frac{Y_1! \cdots Y_n!}{(Y_1-Z_1)! \cdots (Y_n-Z_n)!}} =\sum_{i=1}^{n}{\frac{Y_i!}{(Y_i-1)!}} =\sum_{i=1}^{n}{Y_i} .\end{equation*} Substituting back, we obtain the proposition.

3. \(n\)-th derivative of \(u(x)/v(x)\) (Quotient rule)

In this section, we combine Theorem 1 with Leibniz's rule to obtain the general quotient rule.

Theorem 2.[Quotient rule] Let \(u\) and \(v\) be \(n\) times differentiable functions of \(x\), for any \(n\in\mathbb{N}\) and at every point where \(v\neq0\), we have that \begin{equation*} \begin{split} \left(\frac{u}{v}\right)^{(n)} =\frac{d^n}{dx^n}\left(\frac{u}{v}\right) &=n!\sum_{\ell=0}^{n}{\frac{u^{(n-\ell)}}{(n-\ell)!}\sum_{\sum{iy_{k,i}}=\ell}{\binom{\sum{y_{k,i}}}{y_{k,1}, \ldots, y_{k,\ell}}\frac{(-1)^{\sum{y_{k,i}}}}{v^{\sum{y_{k,i}}+1}}\prod_{i=1}^{\ell}{\left[\frac{v^{(i)}}{i!}\right]^{y_{k,i}}}}}\\ &=n!\sum_{\pi_k=0}^{n}{\frac{u^{(n-\pi_k)}}{(n-\pi_k)!}{\binom{\sum{y_{k,i}}}{y_{k,1}, \ldots, y_{k,\pi_k}}\frac{(-1)^{\sum{y_{k,i}}}}{v^{\sum{y_{k,i}}+1}}\prod_{i=1}^{\pi_k}{\left[\frac{v^{(i)}}{i!}\right]^{y_{k,i}}}}} .\end{split} \end{equation*}

Proof. Applying Leibniz's rule to Theorem 1, we obtain this theorem.

Remark 5. A much more compact way of expressing this theorem is as follows: \begin{equation*} \left(\frac{u}{v}\right)^{(n)} =n!\sum_{\ell=0}^{n}{\sum_{\sum{iy_{k,i}}=n-\ell}{C_k\left[\frac{u^{(\ell)}}{\ell!}\right]\prod_{i=1}^{n-\ell}{\frac{1}{y_{k,i}!}\left[\frac{v^{(i)}}{i!}\right]^{y_{k,i}}}}} =n!\sum_{\pi_k=0}^{n}{{C_k\frac{u^{(n-\pi_k)}}{(n-\pi_k)!}\prod_{i=1}^{\pi_k}{\frac{1}{y_{k,i}!}\left[\frac{v^{(i)}}{i!}\right]^{y_{k,i}}}}} \end{equation*} where \(C_k\) is as defined in Eq. (5) and \(\pi_k\) is as defined in Eq. (2). Using the \(C_k\) and \(\pi_k\) notation, we obtain a simple and easy to memorize expression for the general quotient rule that could potentially be taught to university students at the same time as the general product rule.

Remark 6. Let \(I_k=(\{1\}_{y_{k,1}}, \ldots, \{\pi_k\}_{y_{k,\pi_k}})\) and \(P_k=(y_{k,1},\ldots, y_{k,\pi_k})\). We can write the following interesting but not very practical expressions: \begin{equation*} \begin{split} \left(\frac{u}{v}\right)^{(n)} &=\sum_{\pi_k=0}^{n}{{C_{k} \binom{n}{I_k,n-\pi_k} u^{(n-\pi_k)}\prod_{i=1}^{\pi_k}{\frac{\left[v^{(i)}\right]^{y_{k,i}}}{y_{k,i}!}}}} \\ &=\sum_{\pi_k=0}^{n}{{\binom{n}{I_k,n-\pi_k}\binom{\sum{y_{k,i}}}{P_k} \frac{u^{(n-\pi_k)}}{v}\prod_{i=1}^{\pi_k}{\left[-\frac{v^{(i)}}{v}\right]^{y_{k,i}}}}} .\end{split} \end{equation*}
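Theorem 2 can be cross-checked the same way; the following sketch (ours; sample \(u\), \(v\) arbitrary) implements the double sum of the theorem and compares it with direct differentiation of \(u/v\):

```python
import sympy as sp
from itertools import product
from math import factorial

x = sp.symbols('x')

def quotient_rule(u, v, n):
    """n-th derivative of u/v, computed from the double sum of Theorem 2."""
    total = sp.Integer(0)
    for ell in range(n + 1):
        u_part = sp.diff(u, x, n - ell) / factorial(n - ell)
        ranges = [range(ell // i + 1) for i in range(1, ell + 1)]
        for y in product(*ranges):      # partitions of ell (empty tuple when ell = 0)
            if sum(i * yi for i, yi in enumerate(y, 1)) != ell:
                continue
            r = sum(y)
            multinom = factorial(r)
            for yi in y:
                multinom //= factorial(yi)
            term = sp.Integer(multinom * (-1)**r) / v**(r + 1)
            for i, yi in enumerate(y, 1):
                term *= (sp.diff(v, x, i) / factorial(i))**yi
            total += u_part * term
    return factorial(n) * total

u, v = sp.sin(x), x**2 + 1
difference = quotient_rule(u, v, 3) - sp.diff(u / v, x, 3)  # vanishes identically
```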

4. Applications

Let us first define some notation to simplify the expressions we will derive. For a given partition \((y_{k,1}, \ldots, y_{k,n})\) of \(n\), we define the following notation:
\begin{align} c_k& =\prod_{i=1}^{n}{\frac{1}{i^{y_{k,i}}y_{k,i}!}}, & \overline{c}_k&= \prod_{i=1}^{n}{\frac{(-1)^{y_{k,i}}}{i^{y_{k,i}}y_{k,i}!}}. \end{align}
(7)
\begin{align} p_k& =\prod_{i=1}^{n}{\frac{1}{i!^{y_{k,i}}y_{k,i}!}}, & \overline{p}_k&= \prod_{i=1}^{n}{\frac{(-1)^{y_{k,i}}}{i!^{y_{k,i}}y_{k,i}!}}. \end{align}
(8)
\begin{align} q_k& =\prod_{i=1}^{n}{\frac{1}{i!^{y_{k,i}}}}, & \overline{q}_k&= \prod_{i=1}^{n}{\frac{(-1)^{y_{k,i}}}{i!^{y_{k,i}}}}. \end{align}
(9)

4.1. Partition identities

In this section, we will show how the quotient rule developed can be used to derive partition identities. In particular, we will derive a few special partition identities.

Proposition 3. For any \(n\in\mathbb{N}\), we have that \begin{equation*} \sum_{\sum{iy_{k,i}}=n}{\binom{\sum{y_{k,i}}}{y_{k,1},\ldots,y_{k,n}}(-1)^{\sum{y_{k,i}}}\prod_{i=1}^{n}{\frac{1}{i!^{y_{k,i}}}}} =\frac{(-1)^n}{n!} .\end{equation*} Using the notation, this proposition can be expressed as \begin{equation*} \sum_{\sum{iy_{k,i}}=n}{\binom{r_k}{y_{k,1},\ldots,y_{k,n}}(-1)^{r_k}q_k} =\sum_{\sum{iy_{k,i}}=n}{\binom{r_k}{y_{k,1},\ldots,y_{k,n}}\overline{q}_k} =\frac{(-1)^n}{n!} .\end{equation*}

Remark 7. We can also rewrite it as follows: \begin{equation*} \sum_{\sum{iy_{k,i}}=n}{r_k!(-1)^{r_k}\prod_{i=1}^{n}{\frac{1}{i!^{y_{k,i}}y_{k,i}!}}} =\frac{(-1)^n}{n!} .\end{equation*} Using the notation, this proposition can be expressed as \begin{equation*} \sum_{\sum{iy_{k,i}}=n}{r_k!(-1)^{r_k}p_k} =\sum_{\sum{iy_{k,i}}=n}{r_k!\overline{p}_k} =\frac{(-1)^n}{n!} .\end{equation*}

Proof. From Theorem 1 with \(v(x)=e^x\) and knowing that \(v^{(i)}(x)=e^x\) for all \(i\), we get \begin{equation*} \begin{split} \frac{d^n}{dx^n}{\left(\frac{1}{e^x}\right)} &=n!\sum_{\sum{iy_{k,i}}=n}{\binom{\sum{y_{k,i}}}{y_{k,1}, \ldots, y_{k,n}}\frac{(-1)^{\sum{y_{k,i}}}}{(e^x)^{\sum{y_{k,i}}+1}}\prod_{i=1}^{n}{\left[\frac{e^x}{i!}\right]^{y_{k,i}}}} \\ &=n!e^{-x}\sum_{\sum{iy_{k,i}}=n}{\binom{\sum{y_{k,i}}}{y_{k,1},\ldots,y_{k,n}}(-1)^{\sum{y_{k,i}}}\prod_{i=1}^{n}{\frac{1}{i!^{y_{k,i}}}}}. \end{split} \end{equation*} Noticing that \begin{equation*} \frac{d^n}{dx^n}{\left(\frac{1}{e^x}\right)} =\frac{d^n}{dx^n}{\left({e^{-x}}\right)} =(-1)^n e^{-x}, \end{equation*} we obtain the proposition.
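Proposition 3 can be verified exactly in rational arithmetic; the sketch below (ours, illustrative) evaluates the partition sum for small \(n\) and recovers \((-1)^n/n!\):

```python
from fractions import Fraction
from itertools import product
from math import factorial

def prop3_sum(n):
    """Left-hand side of Proposition 3, summed over the partitions of n."""
    ranges = [range(n // i + 1) for i in range(1, n + 1)]
    total = Fraction(0)
    for y in product(*ranges):
        if sum(i * yi for i, yi in enumerate(y, 1)) != n:
            continue
        r = sum(y)
        multinom = factorial(r)
        denom = 1
        for i, yi in enumerate(y, 1):
            multinom //= factorial(yi)
            denom *= factorial(i)**yi
        total += Fraction(multinom * (-1)**r, denom)
    return total

# prop3_sum(n) equals Fraction((-1)**n, factorial(n)) for every n
```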

Proposition 4. For any \(n\in\mathbb{N}\) and any \(m\in\mathbb{N}^*\), we have that \begin{equation*} \sum_{\sum{iy_{k,i}}=n}{\binom{\sum{y_{k,i}}}{y_{k,1}, \ldots, y_{k,n}}(-1)^{\sum{y_{k,i}}}\prod_{i=1}^{n}{\left[\binom{m}{i}\right]^{y_{k,i}}}} =(-1)^n \binom{n+m-1}{m-1}. \end{equation*}

Proof. Let \(v(x)=x^m\), then \(v^{(i)}=i!\binom{m}{i}x^{m-i}\). Hence, from Theorem 1, we have \begin{equation*} \begin{split} \frac{d^n}{dx^n}{\left(\frac{1}{x^m}\right)} &=n!\sum_{\sum{iy_{k,i}}=n}{\binom{\sum{y_{k,i}}}{y_{k,1}, \ldots, y_{k,n}}\frac{(-1)^{\sum{y_{k,i}}}}{x^{m\sum{y_{k,i}}+ m}}\prod_{i=1}^{n}{\left[x^{m-i}\binom{m}{i}\right]^{y_{k,i}}}} \\ &=n!\sum_{\sum{iy_{k,i}}=n}{\binom{\sum{y_{k,i}}}{y_{k,1}, \ldots, y_{k,n}}\frac{(-1)^{\sum{y_{k,i}}}}{x^{m\sum{y_{k,i}}+m}}(x^{m\sum{y_{k,i}}-n})\prod_{i=1}^{n}{\left[\binom{m}{i}\right]^{y_{k,i}}}} \\ &=n!x^{-m-n}\sum_{\sum{iy_{k,i}}=n}{\binom{\sum{y_{k,i}}}{y_{k,1}, \ldots, y_{k,n}}(-1)^{\sum{y_{k,i}}}\prod_{i=1}^{n}{\left[\binom{m}{i}\right]^{y_{k,i}}}}. \end{split} \end{equation*} Noticing that \begin{equation*} \frac{d^n}{dx^n}{\left(\frac{1}{x^m}\right)} =\frac{d^n}{dx^n}{\left({x^{-m}}\right)} =(-1)^n n!\binom{n+m-1}{m-1} x^{-m-n}, \end{equation*} we obtain the proposition.

Corollary 1. Setting \(m=n\), we get \begin{equation*} \sum_{\sum{iy_{k,i}}=n}{\binom{\sum{y_{k,i}}}{y_{k,1}, \ldots, y_{k,n}}(-1)^{\sum{y_{k,i}}}\prod_{i=1}^{n}{\left[\binom{n}{i}\right]^{y_{k,i}}}} =(-1)^n \binom{2n-1}{n-1}. \end{equation*}
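Proposition 4, and Corollary 1 as the special case \(m=n\), can likewise be checked by direct enumeration (illustrative sketch, ours):

```python
from itertools import product
from math import factorial, comb

def prop4_sum(n, m):
    """Left-hand side of Proposition 4, summed over the partitions of n."""
    ranges = [range(n // i + 1) for i in range(1, n + 1)]
    total = 0
    for y in product(*ranges):
        if sum(i * yi for i, yi in enumerate(y, 1)) != n:
            continue
        multinom = factorial(sum(y))
        binom_prod = 1
        for i, yi in enumerate(y, 1):
            multinom //= factorial(yi)
            binom_prod *= comb(m, i)**yi
        total += multinom * (-1)**sum(y) * binom_prod
    return total

# prop4_sum(n, m) equals (-1)**n * comb(n + m - 1, m - 1); m = n gives Corollary 1
```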

Proposition 5. For any \(n\in\mathbb{N}\) and any \(m\in\mathbb{N}^*\), we have that \begin{equation*} \sum_{\sum{iy_{k,i}}=n}{\binom{\sum{y_{k,i}}}{y_{k,1}, \ldots, y_{k,n}}(-1)^{\sum{y_{k,i}}}\prod_{i=1}^{n}{\left[\binom{i+m-1}{m-1}\right]^{y_{k,i}}}} =(-1)^n \binom{m}{n}. \end{equation*}

Proof. Let \(v(x)=x^{-m}\), then \(v^{(i)}=(-1)^{i}i!\binom{i+m-1}{m-1}x^{-(m+i)}\). Hence, from Theorem 1, we have \begin{equation*} \begin{split} \frac{d^n}{dx^n}{\left(\frac{1}{x^{-m}}\right)} &=n!\sum_{\sum{iy_{k,i}}=n}{\binom{\sum{y_{k,i}}}{y_{k,1}, \ldots, y_{k,n}}\frac{(-1)^{\sum{y_{k,i}}}}{x^{-m\sum{y_{k,i}} - m}}\prod_{i=1}^{n}{\left[(-1)^i\binom{i+m-1}{m-1}x^{-(m+i)}\right]^{y_{k,i}}}} \\ &=n!(-1)^n\sum_{\sum{iy_{k,i}}=n}{\binom{\sum{y_{k,i}}}{y_{k,1}, \ldots, y_{k,n}}\frac{(-1)^{\sum{y_{k,i}}}(x^{-m\sum{y_{k,i}}-n})}{x^{-m\sum{y_{k,i}}-m}}\prod_{i=1}^{n}{\left[\binom{i+m-1}{m-1}\right]^{y_{k,i}}}} \\ &=(-1)^n n!x^{m-n}\sum_{\sum{iy_{k,i}}=n}{\binom{\sum{y_{k,i}}}{y_{k,1}, \ldots, y_{k,n}}(-1)^{\sum{y_{k,i}}}\prod_{i=1}^{n}{\left[\binom{i+m-1}{m-1}\right]^{y_{k,i}}}}. \end{split} \end{equation*} Noticing that \begin{equation*} \frac{d^n}{dx^n}{\left(\frac{1}{x^m}\right)} =\frac{d^n}{dx^n}{\left({x^{-m}}\right)} =n!\binom{m}{n} x^{m-n}, \end{equation*} we obtain the proposition.

An extremely interesting result that can be derived from Proposition 5 is the following formula for the alternating sum, over partitions, of multinomial coefficients.

Corollary 2. Setting \(m=1\), we get \begin{equation*} \sum_{\sum{iy_{k,i}}=n}{\binom{\sum{y_{k,i}}}{y_{k,1}, \ldots, y_{k,n}}(-1)^{\sum{y_{k,i}}}} =(-1)^n \binom{1}{n} = \begin{cases} (-1)^n, & n=0,1,\\ 0, & n \geq 2. \end{cases} \end{equation*}

4.2. Special \(n\)-th derivatives

Because of the absence of a general quotient rule, there were many \(n\)-th derivatives for which we could not obtain an explicit expression. In this section, we will use the quotient rule derived to develop an expression for some of these derivatives. The first special \(n\)-th derivative is that of \(\log_{x}{a}\) as well as that of the reciprocal of \(\ln x\). In 2014, Feng Qi [14] introduced the following expression for the reciprocal of \(\ln x\):
\begin{equation} \label{Feng} \left(\frac{1}{\ln x}\right)^{(n)} =\frac{(-1)^n}{x^n}\sum_{i=2}^{n+1}{\frac{a_{n,i}}{(\ln x)^i}}, \end{equation}
(10)
where
\begin{equation} a_{n,2}=(n-1)! \end{equation}
(11)
and, for \(n+1\geq i \geq 3\),
\begin{equation} a_{n,i}=(i-1)!(n-1)!\sum_{\ell_1=1}^{n-1}{\frac{1}{\ell_1}\sum_{\ell_2=1}^{\ell_1-1}{\frac{1}{\ell_2}\cdots \sum_{\ell_{i-3}=1}^{\ell_{i-4}-1}{\frac{1}{\ell_{i-3}}\sum_{\ell_{i-2}=1}^{\ell_{i-3}-1}{\frac{1}{\ell_{i-2}}}}}}. \end{equation}
(12)
The expression seems simple; however, the \(a_{n,i}\) terms correspond to a kind of multiple harmonic sum. Such sums are very tedious to compute, which makes Eq. (10) somewhat tedious to use. In what follows, using the general reciprocal rule, we will derive a simpler expression.

Proposition 6. For any \(a\in\mathbb{N^*}\), the \(n\)-th derivative of \(\log_{x}{a}\) is given by \begin{equation*} \left(\log_{x}{a}\right)^{(n)} =\left(\frac{\ln a}{\ln x}\right)^{(n)} =(\log_{x}{a})\frac{(-1)^n n!}{x^n}\sum_{\sum{iy_{k,i}}=n}{\frac{({\sum{y_{k,i}}})!}{(\ln x)^{\sum{y_{k,i}}}}\prod_{i=1}^{n}{\frac{1}{i^{y_{k,i}}y_{k,i}!}}}. \end{equation*} Using the notation, we can rewrite it as follows: \begin{equation*} \left(\log_{x}{a}\right)^{(n)} =\left(\frac{\ln a}{\ln x}\right)^{(n)} =(\log_{x}{a})\frac{(-1)^n n!}{x^n}\sum_{\sum{iy_{k,i}}=n}{c_k\frac{r_k!}{(\ln x)^{r_k}}} .\end{equation*}

Proof. From Theorem 1 with \(v=\ln x\), we have \begin{equation*} \begin{split} \left(\log_{x}{a}\right)^{(n)} =\left(\frac{\ln a}{\ln x}\right)^{(n)} &=n!\frac{\ln a}{\ln x}\sum_{\sum{iy_{k,i}}=n}{\binom{\sum{y_{k,i}}}{y_{k,1}, \ldots, y_{k,n}}\frac{(-1)^{\sum{y_{k,i}}}}{(\ln x)^{\sum{y_{k,i}}}}\prod_{i=1}^{n}{\left[\frac{(\ln x)^{(i)}}{i!}\right]^{y_{k,i}}}}. \end{split} \end{equation*} Knowing that, for \(i \geq 1\), \begin{equation*} (\ln x)^{(i)}=\frac{(-1)^{i-1} (i-1)!}{x^i}, \end{equation*} hence, by substituting back and simplifying, we get \begin{equation*} \begin{split} \left(\log_{x}{a}\right)^{(n)} &=n!(\log_{x}{a})\sum_{\sum{iy_{k,i}}=n}{\binom{\sum{y_{k,i}}}{y_{k,1}, \ldots, y_{k,n}}\frac{(-1)^{\sum{y_{k,i}}}}{(\ln x)^{\sum{y_{k,i}}}}\prod_{i=1}^{n}{\left[\frac{(-1)^{i-1}}{i x^i}\right]^{y_{k,i}}}} \\ &=\frac{(-1)^{n} n!}{x^n}(\log_{x}{a})\sum_{\sum{iy_{k,i}}=n}{\binom{\sum{y_{k,i}}}{y_{k,1}, \ldots, y_{k,n}}\frac{1}{(\ln x)^{\sum{y_{k,i}}}}\prod_{i=1}^{n}{\left[\frac{1}{i}\right]^{y_{k,i}}}}. \end{split} \end{equation*} Replacing the multinomial coefficient by its factorial definition, we obtain this proposition.
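Proposition 6 can be cross-checked against direct differentiation; in the sketch below (ours), the base value \(a=7\) is an arbitrary illustrative choice:

```python
import sympy as sp
from itertools import product
from math import factorial

x = sp.symbols('x', positive=True)
a = 7   # any a in N*, chosen arbitrarily

def log_base_x_a_derivative(n):
    """n-th derivative of log_x(a) = ln(a)/ln(x), computed from Proposition 6."""
    ranges = [range(n // i + 1) for i in range(1, n + 1)]
    total = sp.Integer(0)
    for y in product(*ranges):              # partitions of n
        if sum(i * yi for i, yi in enumerate(y, 1)) != n:
            continue
        r = sum(y)
        c_k = sp.Rational(1)
        for i, yi in enumerate(y, 1):
            c_k /= sp.Integer(i)**yi * factorial(yi)   # c_k of Eq. (7)
        total += c_k * factorial(r) / sp.log(x)**r
    log_x_a = sp.log(a) / sp.log(x)
    return log_x_a * sp.Integer((-1)**n) * factorial(n) / x**n * total

difference = log_base_x_a_derivative(3) - sp.diff(sp.log(a) / sp.log(x), x, 3)
```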

Another special \(n\)-th derivative is that of \(\ln{v(x)}\).

Proposition 7. The \(n\)-th derivative of \(\ln v(x)\) is given by \begin{equation*} \begin{split} (\ln v)^{(n)} &=n!\sum_{\sum{iy_{k,i}}=n}{\binom{\sum{y_{k,i}}}{y_{k,1}, \ldots, y_{k,n}}\frac{(-1)^{\sum{y_{k,i}}-1}}{(\sum{y_{k,i}})!v^{\sum{y_{k,i}}}}\prod_{i=1}^{n}{\left[\frac{v^{(i)}}{i!}\right]^{y_{k,i}}}} \\ &=n!\sum_{\sum{iy_{k,i}}=n}{\frac{(\sum{y_{k,i}}-1)!(-1)^{\sum{y_{k,i}}-1}}{v^{\sum{y_{k,i}}}}\prod_{i=1}^{n}{\frac{1}{y_{k,i}!}\left[\frac{v^{(i)}}{i!}\right]^{y_{k,i}}}} .\end{split} \end{equation*}

Proof. From Theorem 2, we have \begin{equation*} \begin{split} (\ln v)^{(n)} &=\left(\frac{v'}{v}\right)^{(n-1)} \\ &=(n-1)!\sum_{\ell=0}^{n-1}{\frac{(v')^{(\ell)}}{\ell!}\sum_{\sum{iy_{k,i}}=n-\ell-1}{\binom{\sum{y_{k,i}}}{y_{k,1}, \ldots, y_{k,n-\ell-1}}\frac{(-1)^{\sum{y_{k,i}}}}{v^{\sum{y_{k,i}}+1}}\prod_{i=1}^{n-\ell-1}{\left[\frac{v^{(i)}}{i!}\right]^{y_{k,i}}}}} \\ &=(n-1)!\sum_{\ell=0}^{n-1}{\frac{v^{(\ell+1)}(\ell+1)}{(\ell+1)!}\sum_{\sum{iy_{k,i}}=n-\ell-1}{\binom{\sum{y_{k,i}}}{y_{k,1}, \ldots, y_{k,n-\ell-1}}\frac{(-1)^{\sum{y_{k,i}}}}{v^{\sum{y_{k,i}}+1}}\prod_{i=1}^{n-\ell-1}{\left[\frac{v^{(i)}}{i!}\right]^{y_{k,i}}}}} \\ &=(n-1)!\sum_{\ell=1}^{n}{\frac{v^{(\ell)}}{\ell!}\ell\sum_{\sum{iy_{k,i}}=n-\ell}{\binom{\sum{y_{k,i}}}{y_{k,1}, \ldots, y_{k,n-\ell}}\frac{(-1)^{\sum{y_{k,i}}}}{v^{\sum{y_{k,i}}+1}}\prod_{i=1}^{n-\ell}{\left[\frac{v^{(i)}}{i!}\right]^{y_{k,i}}}}} .\end{split} \end{equation*} Similar to what was done in the proof of Theorem 1, we defined an extension \((y_{k,1}, \cdots, y_{k,n})\) of each partition \((y_{k,1}, \cdots, y_{k,n-\ell})\) such that \(y_{k,n-\ell+1}=\cdots=y_{k,n}=0\). Now, for every \(\ell\in[1,n]\), let us associate with each partition \((y_{k,1}, \ldots, y_{k,n})\), the partition \((Y_{k,1}, \ldots, Y_{k,n})\) such that \begin{equation*} \begin{cases} Y_{k,i}=y_{k,i}+1, & \text{for } i=\ell, \\ Y_{k,i}=y_{k,i}, & \text{otherwise}. \end{cases} \end{equation*} Notice that \(\sum{Y_{k,i}}=\sum{y_{k,i}}+1\) and that \(\sum{iY_{k,i}}=n\). 
Hence, we can write \begin{equation*} \begin{split} (\ln v)^{(n)} &=(n-1)!\sum_{\ell=1}^{n}{\ell\sum_{\sum{iY_{k,i}}=n}{\binom{\sum{Y_{k,i}-1}}{Y_{k,1}, \ldots, Y_{k,\ell}-1, \ldots, Y_{k,n}}\frac{(-1)^{\sum{Y_{k,i}}-1}}{v^{\sum{Y_{k,i}}}}\prod_{i=1}^{n}{\left[\frac{v^{(i)}}{i!}\right]^{Y_{k,i}}}}} \\ &=(n-1)!\sum_{\sum{iY_{k,i}}=n}{\frac{(-1)^{\sum{Y_{k,i}}-1}}{v^{\sum{Y_{k,i}}}}\prod_{i=1}^{n}{\left[\frac{v^{(i)}}{i!}\right]^{Y_{k,i}}}}\sum_{\ell=1}^{n}{\ell\binom{\sum{Y_{k,i}-1}}{Y_{k,1}, \ldots, Y_{k,\ell}-1, \ldots, Y_{k,n}}} \\ &=(n-1)!\sum_{\sum{iY_{k,i}}=n}{\binom{\sum{Y_{k,i}}}{Y_{k,1}, \ldots, Y_{k,n}}\frac{(-1)^{\sum{Y_{k,i}}-1}}{(\sum{Y_{k,i}})!v^{\sum{Y_{k,i}}}}\prod_{i=1}^{n}{\left[\frac{v^{(i)}}{i!}\right]^{Y_{k,i}}}}\sum_{\ell=1}^{n}{\ell Y_{k,\ell}} \\ &=n!\sum_{\sum{iY_{k,i}}=n}{\binom{\sum{Y_{k,i}}}{Y_{k,1}, \ldots, Y_{k,n}}\frac{(-1)^{\sum{Y_{k,i}}-1}}{(\sum{Y_{k,i}})!v^{\sum{Y_{k,i}}}}\prod_{i=1}^{n}{\left[\frac{v^{(i)}}{i!}\right]^{Y_{k,i}}}} .\end{split} \end{equation*}
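Proposition 7 admits the same kind of check; the sketch below (ours) implements the second form of the formula and compares it with direct differentiation of \(\ln v\):

```python
import sympy as sp
from itertools import product
from math import factorial

x = sp.symbols('x')

def log_v_derivative(v, n):
    """n-th derivative of ln(v(x)), from the second form in Proposition 7 (n >= 1)."""
    ranges = [range(n // i + 1) for i in range(1, n + 1)]
    total = sp.Integer(0)
    for y in product(*ranges):              # partitions of n
        if sum(i * yi for i, yi in enumerate(y, 1)) != n:
            continue
        r = sum(y)                          # r >= 1 for every partition of n >= 1
        term = sp.Integer(factorial(r - 1) * (-1)**(r - 1)) / v**r
        for i, yi in enumerate(y, 1):
            term *= (sp.diff(v, x, i) / factorial(i))**yi / factorial(yi)
        total += term
    return factorial(n) * total

v = sp.exp(x) + x**2 + 1
difference = log_v_derivative(v, 3) - sp.diff(sp.log(v), x, 3)  # vanishes identically
```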

Acknowledgments

The authors would like to thank the referee for his/her valuable comments that resulted in the present improved version of the article.

Conflicts of Interest:

"The authors declare no conflict of interest."

References

  1. Shieh, P., & Verghese, K. (1967). A general formula for the nth derivative of 1/f(x). The American Mathematical Monthly, 74(10), 1239-1240.
  2. Gerrish, F. (1980). 64.2 A useless formula?. The Mathematical Gazette, 64(427), 52.
  3. Furrer, E. M. (2008). Asymptotic behavior of a continuous approximation to the kriging weighting function. Technical Note NCAR/TN-476+STR. National Center for Atmospheric Research, Boulder.
  4. Barabesi, L. (2020). The computation of the probability density and distribution functions for some families of random variables by means of the Wynn-\(\rho\) accelerated Post-Widder formula. Communications in Statistics-Simulation and Computation, 49(5), 1333-1351.
  5. Liu, Y. (2014). Asymptotic moments of symmetric self-normalized sums. Scientiae Mathematicae Japonicae, 77(1), 59-67.
  6. Mahmudov, N., & Matar, M. M. (2017). Existence of mild solution for hybrid differential equations with arbitrary fractional order. TWMS Journal of Pure and Applied Mathematics, 8(2), 160-169.
  7. Rafeiro, H., & Samko, S. (2010). Characterization of the variable exponent Bessel potential spaces via the Poisson semigroup. Journal of Mathematical Analysis and Applications, 365(2), 483-497.
  8. Cao, R. (2017). Hierarchical Stochastic Modelling in Multistable Perception (Doctoral dissertation, Universität Magdeburg, 2017).
  9. Basu, R. (2021). A new formula for investigating delay integro-differential equations using the differential transform method involving a quotient of two functions. Rocky Mountain Journal of Mathematics, 51(2), 413-421.
  10. El Haddad, R. (2022). A generalization of multiple zeta values. Part 1: Recurrent sums. Notes on Number Theory and Discrete Mathematics, 28(2), 167-199.
  11. El Haddad, R. (2022). A generalization of multiple zeta values. Part 2: Multiple sums. Notes on Number Theory and Discrete Mathematics, 28(2), 200-233.
  12. Andrews, G. E. (1998). The Theory of Partitions (No. 2). Cambridge University Press.
  13. Xenophontos, C. (2021). A formula for the \(n^{th}\) derivative of the quotient of two functions. arXiv preprint arXiv:2110.09292.
  14. Qi, F. (2014). Explicit formulas for computing Bernoulli numbers of the second kind and Stirling numbers of the first kind. Filomat, 28(2), 319-327.
Global asymptotic stability of constant equilibrium point in attraction-repulsion chemotaxis model with logistic source term https://old.pisrt.org/psr-press/journals/oma-vol-6-issue-2-2022/global-asymptotic-stability-of-constant-equilibrium-point-in-attraction-repulsion-chemotaxis-model-with-logistic-source-term/ Fri, 30 Dec 2022 18:43:03 +0000 https://old.pisrt.org/?p=6947
OMA-Vol. 6 (2022), Issue 2, pp. 102 - 119 Open Access Full-Text PDF
Abdelhakam Hassan Mohammed and Ali. B. B. Almurad

Open Journal of Mathematical Analysis

Global asymptotic stability of constant equilibrium point in attraction-repulsion chemotaxis model with logistic source term

Abdelhakam Hassan Mohammed\(^{1,2,*}\) and Ali. B. B. Almurad\(^1\)
\(^1\) Department of Mathematics and Computer, College of Education, Alsalam University, Alfula, Sudan.
\(^2\) College of Mathematics and Statistics, Northwest Normal University, Lanzhou 730070, P.R. China.
Correspondence should be addressed to Abdelhakam Hassan Mohammed at bd111hakam@gmail.com

Abstract

This paper deals with nonnegative solutions of the Neumann initial-boundary value problem for an attraction-repulsion chemotaxis model with logistic source term of Eq. (1) in bounded convex domains \(\Omega\subset\mathbb{R}^{n},~ n\geq1\), with smooth boundary. It is shown that if the ratio \(\frac{\mu}{\chi \alpha-\xi \gamma}\) is sufficiently large, then the unique nontrivial spatially homogeneous equilibrium given by \((u_{1},u_{2},u_{3})=(1,~\frac{\alpha}{\beta},~\frac{\gamma}{\eta})\) is globally asymptotically stable in the sense that for any choice of suitably regular nonnegative initial data \((u_{10},u_{20},u_{30})\) such that \(u_{10}\not\equiv0\), the above problem possesses uniquely determined global classical solution \((u_{1},u_{2},u_{3})\) with \((u_{1},u_{2},u_{3})|_{t=0}=(u_{10},u_{20},u_{30})\) which satisfies \(\left\|u_{1}(\cdot,t)-1\right\|_{L^{\infty}(\Omega)}\rightarrow{0},~~
\left\|u_{2}(\cdot,t)-\frac{\alpha}{\beta}\right\|_{L^{\infty}(\Omega)}\rightarrow{0},\left\|u_{3}(\cdot,t)-\frac{\gamma}{\eta}\right\|_{L^{\infty}(\Omega)}\rightarrow{0}\,,\) \(\mathrm{as}~t\rightarrow{\infty}\).

Keywords:

Keller-Segel model; Logistic source; Chemotaxis; Attraction-Repulsion; Asymptotic Stability.
Forecasting the democratic republic of the Congo macroeconomic data with the Bayesian vector autoregressive models https://old.pisrt.org/psr-press/journals/oma-vol-6-issue-2-2022/forecasting-the-democratic-republic-of-the-congo-macroeconomic-data-with-the-bayesian-vector-autoregressive-models/ Fri, 30 Dec 2022 18:34:53 +0000 https://old.pisrt.org/?p=6945
OMA-Vol. 6 (2022), Issue 2, pp. 93 - 101 Open Access Full-Text PDF
Lewis N. K. Mambo, Victor G. Musa and Gabriel M. Kalonda
Abstract:The purpose of this paper is to emphasize the role of Bayesian Vector Autoregressive (VAR) models in macroeconomic analysis and forecasting. To help policy-makers do better, Bayesian VAR models are considered more robust and valuable because they combine the modeller's beliefs, or priors, with the data. By using a BVAR(1) model, we obtain the main results: (i) the best out-of-sample point forecasts; (ii) the exchange rate shock contributes more to inflation; (iii) the inflation shock has strong effects on exchange rate innovations. These results are due to the dollarization of this small open economy. ]]>

Open Journal of Mathematical Analysis

Forecasting the democratic republic of the Congo macroeconomic data with the Bayesian vector autoregressive models

Lewis N. K. Mambo\(^{1,*}\), Victor G. Musa\(^{1}\) and Gabriel M. Kalonda\(^1\)
\(^1\) Central Bank of Congo, University of Kinshasa, Democratic Republic of the Congo.
Correspondence should be addressed to Lewis N. K. Mambo at lewismambo2@gmail.com

Abstract

The purpose of this paper is to emphasize the role of Bayesian Vector Autoregressive (VAR) models in macroeconomic analysis and forecasting. To help policy-makers do better, Bayesian VAR models are considered more robust and valuable because they combine the modeller's beliefs, or priors, with the data. By using a BVAR(1) model, we obtain the main results: (i) the best out-of-sample point forecasts; (ii) the exchange rate shock contributes more to inflation; (iii) the inflation shock has strong effects on exchange rate innovations. These results are due to the dollarization of this small open economy.

Keywords:

Bayesian parameter estimation; Forecasting; Democratic republic of the Congo; Macroeconometric modelling; Uncertainties; Vector autoregressive processes.

1. Introduction

Forecasting is one of the objectives of multiple time series analysis [1]. In multivariate time series analysis, one often uses Vector Autoregressive models, called VAR models for short. They are one of the most successful statistical modeling ideas to have come up in the last forty years, and the use of Bayesian methods makes VAR models generic enough to handle a variety of complex real-world time series [1, 2, 3, 4]. Moreover, VAR models capture the dynamic behavior of the time series. In economics, these dynamic behaviors have contributed to the development of theory through notions such as rational expectations, causality, correlation, persistence, cointegration of macroeconomic variables, the convergence of economies, etc.

Bayesian statistics provides a rational theory of personal beliefs combined with real-world data under uncertainty. In the last three decades, Bayesian statistics has emerged as one of the leading paradigms in which all of this can be done in a unified fashion. As a result, there has been tremendous development in Bayesian theory, methodology, computation, and applications [5, 6]. The appreciation of the potential of Bayesian methods is growing fast both inside and outside the econometrics community. The first encounter with Bayesian ideas by many people simply entails discovering that a particular Bayesian method is superior to classical statistical methods for a specific problem or question. Nothing succeeds like success, and this observed superiority often leads to the further pursuit of Bayesian analysis.

For scientists with little or no formal statistical background, Bayesian methods are being discovered as the only viable way of approaching their problems; for many of them, statistics has become synonymous with Bayesian statistics. Bayesian Vector Autoregressive models are one of the most successful statistical modelling ideas to have come up in the last four decades, and the use of Bayesian methods makes these models generic enough to handle a variety of complex real-world time series.

The purpose of Bayesian inference is to provide mathematical machinery that can be used for modeling systems, where the uncertainties of the system are taken into account and the decisions are made according to rational principles [5, 6, 7, 8]. The Bayesian method, as many might think, is not new but a method that is older than many of the commonly known and well-formulated statistical techniques. The basis for Bayesian statistics was laid down in a revolutionary paper written by British mathematician and Reverend Thomas Bayes (1702 - 1761), which appeared in print in 1763 but was not acknowledged for its significance [9, 10, 11].

The rest of this paper is organized as follows. Section 2 presents the Bayesian vector autoregressive model, Section 3 describes the forecasting methods, Section 4 gives the empirical results, and Section 5 concludes.

2. Bayesian vector autoregressive model

Vector Autoregressions (VARs) are linear multivariate time-series models that capture the joint dynamics of multiple time series. As mentioned by Tsay, the most commonly used multivariate time series model is the vector autoregressive (VAR) model, particularly in the econometric literature, for good reasons. First, the model is relatively easy to estimate. One can use the least-squares (LS), the maximum likelihood (ML), or the Bayesian method, and all three estimation methods have closed-form solutions. For a VAR model, the least-squares estimates are asymptotically equivalent to the ML estimates, and the ordinary least-squares (OLS) estimates are the same as the generalized least-squares (GLS) estimates. Second, the properties of VAR models have been studied extensively in the literature. Finally, VAR models are similar to the multivariate multiple linear regressions widely used in multivariate statistical analysis, and many methods for making inferences in multivariate multiple linear regression can be applied to the VAR model [12].

The pioneering work of Sims [13] proposed to replace the large-scale macroeconomic models popular in the 1960s with VARs and suggested that Bayesian methods could improve upon frequentist ones in estimating the model coefficients. As a result, Bayesian methods are increasingly becoming attractive to researchers in many fields, such as econometrics [14]. Bayesian VARs (BVARs) with macroeconomic variables were first employed in forecasting by Litterman [15] and Doan, Litterman, and Sims [16]. The BVAR is now one of the macro-econometric tools routinely used by scholars and policymakers for structural analysis, scenario analysis, and forecasting in an ever-growing number of applications.

Suppose \(X_t\) is an \(n\times 1\) stationary Gaussian VAR(1) process of the form
\begin{align} X_t = \Pi X_{t-1} + U_t, \;\; U_t \sim N(0_{n \times 1},\Omega_{n \times n})\,, \label{VAR2} \end{align}
(1)
where \(\Pi_{n \times n}\) denotes the matrix of coefficients, \(U_t\) is an \(n \times 1\) vector of innovations, and the prior distribution for \(\Theta := vec(\Pi)\) is a multivariate normal with known mean \(\Theta^{*}\) and covariance matrix \(\Omega_{\Theta}\). For reasons of simplicity and practice, a stationary, stable VAR(1) process is considered. As is well known in the time series literature, a process is stationary if it has time-invariant first and second moments. Since \(X_t\) follows a VAR(1) model, the condition for its stationarity is that all solutions of the determinant equation \(\mid I_{n} - \Pi B \mid = 0\) be greater than 1 in modulus, i.e., lie outside the unit circle [1, 2, 17]. The multivariate time series \(X_t\) follows a vector autoregressive model of order \(p\), VAR(p), which generalizes the VAR(1) model, if
\begin{align} X_t = \Phi_0 + \sum_{i=1}^p \Phi_i X_{t-i} + \epsilon_t\,, \end{align}
(2)
where \(\Phi_0\) is a \(k\)-dimensional constant vector, the \(\Phi_i\) are \(k \times k\) matrices for \(i > 0\) with \(\Phi_p \neq 0\), and \(\epsilon_t\) is a sequence of independent and identically distributed (i.i.d.) random vectors with mean zero and positive definite covariance matrix \(\Sigma_\epsilon\). In econometric analysis and multivariate time series analysis, the useful tools for policymakers are the forecast error variance decomposition and the impulse response functions. The MA representation of the VAR(p) model is
\begin{align}\label{VAR1} X_t = \mu + \sum_{i=1}^{\infty}\Psi_i \epsilon_{t-i}\,. \end{align}
(3)
We can express a VAR(p) model in VAR(1) form by using an expanded series. Define \(Y_t =(X_t^{'},X_{t-1}^{'},...,X_{t-p+1}^{'})^{'}\), which is a \(pk\)-dimensional time series. The VAR(p) in Eq. (2) can then be written as
\begin{align} Y_t = \Xi Y_{t-1} + \Lambda_t\,, \end{align}
(4)
where \(\Lambda_t= (\epsilon_t^{'},0^{'})^{'}\) with \(0\) being a \(k(p-1)\)-dimensional zero vector, and
\begin{align} \Xi = \begin{bmatrix} \Phi_{1} & \Phi_{2} &\cdots& \Phi_{p-1} & \Phi_{p} \\ I & 0 &\cdots& 0 & 0\\ 0 & I&\cdots& 0 & 0\\ \vdots&\vdots&\ddots&\vdots&\vdots\\ 0 & 0 &\cdots& I & 0 \end{bmatrix}, \end{align}
(5)
where it is understood that I and 0 are the \(k \times k\) identity and zero matrices, respectively. The matrix \(\Xi\) is called the companion matrix of the matrix polynomial \(\Phi(B) = I_k - \Phi_1B -\cdots- \Phi_pB^p\). The covariance matrix of \(\Lambda_t\) has a special structure: all of its elements are zero except those in the upper-left corner, which form \(\Sigma_\epsilon\). Also, the necessary and sufficient condition for the weak stationarity of the VAR(p) series is that all solutions of the determinant equation \( \mid\Phi(B) \mid = 0\) are greater than 1 in modulus.
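The stacking above is mechanical, so a brief sketch may help. The following Python snippet (illustrative only; the toy matrices \(\Phi_1,\Phi_2\) are invented, not taken from the paper) builds the companion matrix \(\Xi\) of a VAR(p) and checks weak stationarity through its eigenvalues, which is equivalent to the determinant condition just stated:

```python
import numpy as np

def companion(phis):
    """Stack the k x k AR coefficient matrices Phi_1..Phi_p of a VAR(p)
    into the (kp) x (kp) companion matrix Xi, so that the VAR(p) runs
    as a VAR(1): Y_t = Xi Y_{t-1} + Lambda_t."""
    k, p = phis[0].shape[0], len(phis)
    Xi = np.zeros((k * p, k * p))
    Xi[:k, :] = np.hstack(phis)          # top block row: Phi_1 ... Phi_p
    Xi[k:, :-k] = np.eye(k * (p - 1))    # identity blocks below the top row
    return Xi

# Toy VAR(2) with k = 2: weak stationarity holds iff all eigenvalues
# of the companion matrix lie strictly inside the unit circle.
Phi1 = np.array([[0.5, 0.1], [0.0, 0.4]])
Phi2 = np.array([[0.2, 0.0], [0.1, 0.1]])
Xi = companion([Phi1, Phi2])
print(Xi.shape)                                        # (4, 4)
print(np.all(np.abs(np.linalg.eigvals(Xi)) < 1))       # True: stationary
```

Checking the eigenvalues of \(\Xi\) is the usual computational form of the root condition, since \(\det(I_{kp}-\Xi z)=\det\Phi(z)\).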

2.1. Bayesian estimation methods

As mentioned in [1, 3, 18], in the Bayesian approach it is assumed that the non-sample or prior information is available in the form of a density. Denoting the parameters of interest by \(\Theta\), let us assume that the prior information is summarized in the prior probability density function (p.d.f.) \(g(\Theta)\). The sample information is summarized in the sample density \(f(X \mid \Theta)\), which is algebraically identical to the likelihood function \(\mathcal{L}(\Theta \mid X)\). The two types of information are combined via Bayes' theorem, which states that
\begin{align} g(\Theta \mid X) =\frac{f(X \mid \Theta) g (\Theta)}{f(X)}\,, \end{align}
(6)
where \(f(X)\) denotes the unconditional sample density which, for a given sample, is just a normalizing constant. In other words the distribution of \(\Theta\), given the sample information contained in \(X\), can be summarized by \(g(\Theta \mid X)\). This function is proportional to the likelihood function times the prior density \(g(\Theta)\),
\begin{align} g(\Theta \mid X) \propto f(X \mid \Theta ) g (\Theta) = \mathcal{L}(\Theta \mid X) g (\Theta)\,. \end{align}
(7)
The conditional density \(g(\Theta \mid X)\) is the posterior p.d.f. It contains all the information available on the parameter vector \(\Theta\). Point estimators of \(\Theta\) may be derived from the posterior distribution. That is,
\begin{align} \text{posterior distribution} \propto \text{likelihood} \times \text{prior distribution.} \end{align}
(8)
The normal prior for the parameters of a Gaussian VAR model, \(\Theta :=vec(A)=vec(A_1,...,A_p)\), is a multivariate normal with known mean \(\Theta^*\) and covariance matrix \(\Omega_\theta\),
\begin{align} g(\Theta) = \Big(\frac{1}{2\pi}\Big)^{K^2p/2} \mid \Omega_\theta \mid^{-1/2} \exp\Big[-\frac{1}{2} (\Theta -\Theta^{*})^{'} \Omega_\theta^{-1}(\Theta -\Theta^{*})\Big]. \end{align}
(9)
The Gaussian likelihood function is
\begin{equation} \mathcal{L}(\Theta \mid X) = \Big(\frac{1}{2\pi}\Big)^{KT/2} \mid I_T \otimes \Sigma_u \mid^{-1/2}\exp\Big[-\frac{1}{2} (X - (Z^{'}\otimes I_K)\Theta)^{'} (I_T \otimes \Sigma_u^{-1})(X - (Z^{'}\otimes I_K)\Theta)\Big]. \end{equation}
(10)
Combining the prior information with the sample information summarized in the Gaussian likelihood function gives the posterior density
\begin{align} g(\Theta \mid X) \propto& \mathcal{L}(\Theta \mid X)\, g (\Theta)\nonumber\\ \propto& \exp\Big\{-\frac{1}{2} \Big[ \big(\Omega_\theta^{-1/2} (\Theta -\Theta^{*})\big)^{'} \big(\Omega_\theta^{-1/2} (\Theta -\Theta^{*})\big)\nonumber\\ &+ \big( (I_T \otimes \Sigma_u^{-1/2})X - (Z^{'}\otimes \Sigma_u^{-1/2})\Theta\big)^{'} \big( (I_T \otimes \Sigma_u^{-1/2})X - (Z^{'}\otimes \Sigma_u^{-1/2})\Theta\big)\Big]\Big\}.\label{pd} \end{align}
(11)
Here \(\Omega_\theta^{-1/2} \) and \(\Sigma_u^{-1/2}\) denote the symmetric square root matrices of \(\Omega_\theta^{-1} \) and \(\Sigma_u^{-1}\), respectively. The white noise covariance matrix \(\Sigma_u\) is assumed to be known for the moment. Defining \(w^{'}:= \big[(\Omega_\theta^{-1/2}\Theta^{*})^{'} \;\;\; ((I_T\otimes \Sigma_u^{-1/2})X)^{'}\big]\) and \( W^{'}:= \big[\Omega_\theta^{-1/2} \; \;\;\; Z\otimes \Sigma_u^{-1/2}\big]\), the exponent in (11) can be rewritten as \begin{align*} & -\frac{1}{2}(w - W\Theta)^{'}(w - W\Theta)=-\frac{1}{2}\Big[(\Theta - \bar{\Theta})^{'}W^{'}W (\Theta - \bar{\Theta}) + (w - W\bar{\Theta})^{'}(w - W\bar{\Theta} )\Big], \end{align*} where
\begin{align} \bar{\Theta} := (W^{'} W)^{-1}W^{'} w = [\Omega_{\theta}^{-1} + (Z Z^{'}\otimes \Sigma_u^{-1})]^{-1} [\Omega_{\theta}^{-1} \Theta^{*} + ( Z\otimes \Sigma_u^{-1})X]\,. \end{align}
(12)
The final values of the parameters obtained in the computation are the posterior means of the VAR(1) coefficients, here estimated using the Minnesota prior.
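As a sketch of how the posterior mean (12) can be evaluated in practice, the following Python fragment simulates a toy VAR(1) and applies the formula directly. All numbers here (the true coefficients, the noise covariance, the loose normal prior) are illustrative assumptions, not values from the paper:

```python
import numpy as np

rng = np.random.default_rng(0)

# Simulate a small stationary Gaussian VAR(1), X_t = Pi X_{t-1} + U_t.
k, T = 2, 500
Pi_true = np.array([[0.5, 0.1], [0.2, 0.3]])
Sigma_u = 0.1 * np.eye(k)
X = np.zeros((T + 1, k))
for t in range(T):
    X[t + 1] = Pi_true @ X[t] + rng.multivariate_normal(np.zeros(k), Sigma_u)

Y = X[1:].T                      # k x T matrix of observations
Z = X[:-1].T                     # k x T matrix of lagged regressors
y = Y.reshape(-1, order="F")     # vec(Y): stack the columns y_1, ..., y_T

Theta_star = np.zeros(k * k)     # prior mean Theta^*: shrink toward zero
Omega_inv = np.eye(k * k)        # prior precision Omega_theta^{-1} (loose)
S_inv = np.linalg.inv(Sigma_u)   # Sigma_u^{-1}, assumed known

# Eq. (12): posterior mean of Theta = vec(Pi).
A = Omega_inv + np.kron(Z @ Z.T, S_inv)
b = Omega_inv @ Theta_star + np.kron(Z, S_inv) @ y
Pi_post = np.linalg.solve(A, b).reshape(k, k, order="F")
print(np.round(Pi_post, 2))      # close to Pi_true under a loose prior
```

With a loose prior the posterior mean collapses toward the OLS estimate \(\hat{\Pi} = YZ'(ZZ')^{-1}\); a tighter \(\Omega_\theta\) would shrink it toward \(\Theta^{*}\).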

3. Forecasting methods

According to the literature, there are many forecasting methods; here we focus on two of them, point forecasts and interval forecasts. For point forecasts, a general BVAR(p) process,
\begin{align} X_t = \Phi_0 + \sum_{i=1}^p \Phi_i X_{t-i} + \epsilon_t \end{align}
(13)
can be written in the BVAR(1) companion form
\begin{align} Y_t = \Pi Y_{t-1} + U_t, \;\; U_t \sim N(0_{n \times 1},\Omega_{n \times n}), \label{VAR6} \end{align}
(14)
and the optimal predictor of \(Y_{t + h}\) at origin \(t\) is
\begin{align} Y_t(h) = \Pi^h Y_{t} = \Pi Y_{t}(h -1). \end{align}
(15)
For interval forecasts, assume the BVAR process and the forecast errors are Gaussian. A \((1-\alpha)100\%\) interval forecast, \(h\) periods ahead, for the \(k\)th component of \(X_t\) is
\begin{align} X_{k,t}(h) \pm z_{(\alpha/2)} \sigma_{k}\,, \end{align}
(16)
where \(\sigma_{k}\) is the square root of the \(k\)th diagonal element of \(\Omega_\varepsilon\). For example, if the distribution is normal, then for \(95\%\) confidence the Z-score equals 1.96, that is, \(z_{(\alpha/2)}=1.96\).
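The recursion (15) and the interval (16) can be sketched in a few lines of Python. The coefficient matrix, innovation covariance, and starting vector below are invented for illustration; for the forecast standard error at horizon \(h\) the sketch uses the textbook accumulated MSE \(\sum_{i=0}^{h-1}\Pi^i\Omega(\Pi^i)'\), which reduces to \(\Omega\) at \(h=1\):

```python
import numpy as np

# Illustrative VAR(1): coefficient matrix, innovation covariance, state.
Pi = np.array([[0.5, 0.1], [0.2, 0.3]])
Omega = np.array([[0.04, 0.00], [0.00, 0.09]])
Y_t = np.array([0.02, 0.01])

def point_forecasts(Pi, Y_t, H):
    """Y_t(h) = Pi Y_t(h-1), computed recursively for h = 1..H."""
    out, y = [], Y_t
    for _ in range(H):
        y = Pi @ y
        out.append(y)
    return np.array(out)

def forecast_mse(Pi, Omega, h):
    """MSE(h) = sum_{i=0}^{h-1} Pi^i Omega (Pi^i)'."""
    mse, P = np.zeros_like(Omega), np.eye(len(Omega))
    for _ in range(h):
        mse += P @ Omega @ P.T
        P = Pi @ P
    return mse

H = 6
fc = point_forecasts(Pi, Y_t, H)                  # shape (6, 2)
sigma = np.sqrt(np.diag(forecast_mse(Pi, Omega, H)))
lower = fc[-1] - 1.96 * sigma                     # 95% interval, h = H
upper = fc[-1] + 1.96 * sigma
print(fc[-1], lower, upper)
```

Each component of the \(H\)-step point forecast then sits inside its own \(\pm 1.96\,\sigma_k\) band, as in Eq. (16).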

3.1. Forecast error variance decomposition

As presented in [17], by using the MA representation of a VAR(p) model in Eq. (3) and the fact that \(Cov(\eta_t)=I_k\), we see that the \(l\)-step ahead forecast error of \(Z_{h+l}\) at the forecast origin \(t=h\) can be written as
\begin{align} e_h(l)=\psi_0 \eta_{h+l} + \psi_1 \eta_{h+l-1} +\cdots+\psi_{l-1} \eta_{h+1}, \end{align}
(17)
and the covariance matrix of the forecast error is
\begin{align} Cov[e_h(l)]=\sum_{v=0}^{l-1}\psi_\upsilon \psi_\upsilon^{'}\,. \label{FEVD2} \end{align}
(18)
From Eq. (18), the variance of the forecast error \(e_{h,i}(l)\), which is the \(i\)th component of \(e_h(l)\), is
\begin{align} Var[e_{h,i}(l)]=\sum_{v=0}^{l-1}\sum_{j=1}^k\psi_{\upsilon,ij}^2 =\sum_{j=1}^k\sum_{v=0}^{l-1}\psi_{\upsilon,ij}^2 \,. \label{FEVD3} \end{align}
(19)
Using Eq. (19), we define
\begin{align} \omega_{ij} (l)= \sum_{v=0}^{l-1}\psi_{\upsilon,ij}^2, \end{align}
(20)
and obtain
\begin{align} Var[e_{h,i}(l)]=\sum_{j=1}^k\omega_{ij} (l). \label{FEVD4} \end{align}
(21)
Therefore, the quantity \(\omega_{ij}(l)\) can be interpreted as the contribution of the \(j\)th shock \(\eta_{jt}\) to the variance of the \(l\)-step ahead forecast error of \(Z_{it}\). Eq. (21) is referred to as the forecast error variance decomposition. In particular, \(\omega_{ij}(l)/Var[e_{h,i}(l)]\) is the percentage of the contribution from the shock \(\eta_{jt}\).
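The quantities \(\omega_{ij}(l)\) of Eqs. (20)-(21) can be sketched for a VAR(1), where the orthogonalized MA coefficients are simply \(\psi_v = \Pi^v P\) with \(P\) the lower Cholesky factor of the innovation covariance (the example matrices are invented for illustration):

```python
import numpy as np

def ma_coeffs(Pi, P_chol, L):
    """Orthogonalized MA matrices psi_v = Pi^v @ P for a VAR(1), with P
    the lower Cholesky factor of Sigma, so that Cov(eta_t) = I."""
    psis, M = [], np.eye(len(Pi))
    for _ in range(L):
        psis.append(M @ P_chol)
        M = Pi @ M
    return psis

def fevd(Pi, Sigma, l):
    """omega_ij(l) = sum_{v=0}^{l-1} psi_{v,ij}^2, row-normalized: the
    share of variable i's l-step forecast error variance due to shock j."""
    P = np.linalg.cholesky(Sigma)
    omega = sum(psi ** 2 for psi in ma_coeffs(Pi, P, l))  # elementwise
    return omega / omega.sum(axis=1, keepdims=True)

# Illustrative 2-variable system.
Pi = np.array([[0.5, 0.1], [0.2, 0.3]])
Sigma = np.array([[0.04, 0.01], [0.01, 0.09]])
print(np.round(fevd(Pi, Sigma, 8), 3))   # each row sums to one
```

Each row of the output is the percentage split of Eq. (21) for one variable, the object reported verbally in §4.5.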

3.2. Impulse response functions

Suppose that the bivariate time series \(z_t\) consists of monthly inflation and exchange rate growth; one might be interested in knowing the effect on the monthly inflation rate if the monthly exchange rate growth is increased or decreased by one unit. This type of study is referred to as impulse response analysis in the statistical literature and multiplier analysis in the econometric literature. The coefficient matrices \(\Psi_i\) of the MA representation of a VAR(p) model are referred to as the coefficients of the impulse response functions. The summation \(\Phi_n =\sum_{i=0}^n \Psi_i\) denotes the accumulated responses over \(n\) periods to a unit shock to \(Z_t\). From the MA representation of \(Z_t\) and using the Cholesky decomposition of \(\Sigma_\varepsilon\), we have
\begin{align} Z_t =[\Phi_0 + \Phi_1 B + \Phi_2 B^2 + ....]\eta_t\,, \end{align}
(22)
where \(\Phi_l = \Psi_l U^{'}\), \(\Sigma_\varepsilon = U^{'} U\), and \(\eta_t = (U^{'})^{-1}\varepsilon_t\) for \(l \geq 0\). Thus, the components of \(\eta_t\) are uncorrelated and have unit variance. The total accumulated responses over all future periods are defined as \(\Phi_\infty =\sum_{i=0}^{\infty} \Psi_i\) and called the total multipliers or long-run effects.
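For a stable VAR(1) the accumulated responses and the long-run multipliers have closed forms, since \(\sum_{v\ge 0}\Pi^v=(I-\Pi)^{-1}\). The sketch below (with invented example matrices) computes orthogonalized impulse responses via the Cholesky factor and verifies that the accumulated responses converge to the total multipliers:

```python
import numpy as np

# Illustrative VAR(1) coefficient matrix and innovation covariance.
Pi = np.array([[0.5, 0.1], [0.2, 0.3]])
Sigma = np.array([[0.04, 0.01], [0.01, 0.09]])
P = np.linalg.cholesky(Sigma)            # lower Cholesky factor

# Response of variable i at horizon v to a one-standard-deviation
# shock in eta_j is entry (i, j) of Pi^v @ P.
irf = [np.linalg.matrix_power(Pi, v) @ P for v in range(12)]

# Accumulated responses Phi_n = sum_{i=0}^{n} Psi_i, horizon by horizon.
accumulated = np.cumsum(irf, axis=0)

# Total multipliers (long-run effects): (I - Pi)^{-1} P.
long_run = np.linalg.inv(np.eye(2) - Pi) @ P

print(np.round(accumulated[-1], 3))
print(np.round(long_run, 3))             # near-identical for a stable Pi
```

Because the spectral radius of \(\Pi\) is well below one here, twelve horizons already reproduce the long-run multipliers to three decimals.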

4. Empirical results

Bayesian vector autoregressive models are often used in any scientific field where forecasting guides policy analysis. For example, BVAR models are considered essential macroeconometric analysis tools in macroeconomics.

4.1. Economic intuitions behind the VAR(1) model

Macroeconometric modeling increased in importance during the late 1950s and the 1960s, achieving a very influential role in macroeconomic policy-making during the 1970s. Its failure to deliver the precise economic control it had seemed to promise then led to a barrage of attacks, ranging from disillusion and skepticism on the part of policymakers to detailed and well-argued academic criticism of the basic methodology of the approach.

The most powerful and influential of these academic arguments came from Sims (1980) in his article 'Macroeconomics and Reality'. On three quite separate grounds, Sims argued against the basic process of model identification, which lies at the heart of the Cowles Commission methodology. First, most economists would agree that there are many macroeconomic variables whose cyclical fluctuations are of interest, and would agree further that fluctuations in these series are interrelated [13]. A weakness of BVAR models, however, is that they embody little economic theory; in time series analysis and econometrics they are well known as "black-box" models.

4.2. Data analysis

To illustrate the Bayesian VAR(1) model, we use informative priors such as the Minnesota prior with monthly data from the Democratic Republic of the Congo on the inflation rate \(\pi_t\), the change of the exchange rate \(e_t\), money growth \(m_t\), and the evolution of the copper price \(h_t\). The sample runs from January 2004 to September 2018. These four variables are commonly used in the D.R.C.'s forecasting. Copper prices are included because the mining sector dominates this economy: the D.R.C. is the fourth-largest copper producer in the world, so any price change in the international market affects, positively or negatively, macroeconomic stability. The summary statistics are given in Table 1.
Table 1. Summary statistics.
\(\pi_t\) \(e_t\) \(m_t\) \(h_t\)
Mean 0.0126 0.0083 0.0191 0.0052
Median 0.0055 -0.0022 0.0161 0.0074
Max 0.1139 0.1054 0.1758 0.2308
Min -0.0746 -0.0970 -0.1177 -0.3501
Std Dev 0.0206 0.0248 0.0521 0.0719
Skewness 1.5536 0.6023 0.2024 -0.7503
Kurtosis 9.3533 6.8723 3.1228 7.7440
Jarque-Bera stat 366.8031 120.6023 1.3117 181.5513
Prob (JB) 0.000 0.0000 0.5190 0.0000
Sum 2.2170 1.4601 3.3662 0.9084
Observations 176 176 176 176
Using the software EViews 11 and the data, the maximum lag of the BVAR model is 1; the forecasting and structural analysis will therefore be done with the BVAR(1) model. The popular information criteria used for model selection are AIC: Akaike Information Criterion, SC: Schwarz Information Criterion, HQ: Hannan-Quinn Information Criterion, and JB: Jarque-Bera. According to these empirical results, the optimal lag of our model is 1, so we estimate the VAR(1) process using the Bayesian estimation method. The selection of the lag length is presented in Table 2.
Table 2. Selection of lag length.
Lag log L AIC SC HQ
0 1280.9 -15.20 -15.13 -15.17
1 1370.7 \(-16.08^{*}\) \(-15.71^{*}\) \( -15.93^{*}\)
2 1379.7 -16.00 -15.33 -15.72
3 1387.67 -15.90 -14.93 -15.50
4 1394.73 -15.79 -14.53 -15.28
(*) indicates the calculated optimal lag of the VAR(p) model.

4.3. Estimated posterior mean coefficients of VAR

To illustrate this approach in a simple way, our Bayesian VAR(1) model takes the matrix form
\begin{align} \left[\begin{array}{c} \pi_{t} \\ e_{t}\\ m_{t} \\ h_{t}\end{array}\right]= \begin{bmatrix} \psi_{11} & \psi_{12} & \psi_{13} & \psi_{14} \\ \psi_{21} & \psi_{22} & \psi_{23} & \psi_{24} \\ \psi_{31} & \psi_{32} & \psi_{33} & \psi_{34} \\ \psi_{41} & \psi_{42} & \psi_{43} & \psi_{44} \end{bmatrix} \times \left[\begin{array}{c} \pi_{t-1} \\ e_{t-1}\\ m_{t-1} \\ h_{t-1}\end{array}\right] + \left[\begin{array}{c} u_{1t} \\ u_{2t}\\ u_{3t} \\ u_{4t}\end{array}\right]\,, \end{align}
(23)
where \(\pi_t\), \(e_t\), \(m_t\), and \(h_t\) denote the monthly CPI inflation rate, the change of the exchange rate, money growth, and the change of the copper price, respectively, and \(\{u_{it}, i=1,2,3,4\}\) denote the shocks of the economic policies. In the literature, they are called the "innovations". We assume the \(u_{it}\) are Gaussian white noise processes.
Table 3. BVAR coefficients for the Litterman/Minnesota prior.
\(\pi_t\) \(e_t\) \(m_t\) \(h_t\)
\(\pi_{t-1}\) 0.3168 0.1597 0.3103 0.1643
\(e_{t-1}\) 0.2863 0.2593 0.0682 0.2301
\(m_{t-1}\) 0.0616 0.0794 -0.1410 0.0248
\(h_{t-1}\) 0.0167 -0.0451 -0.0428 0.2792
With VARs, the parameters themselves are rarely of direct interest, because there are so many of them that it is hard for the reader to interpret the table of VAR coefficients. However, Table 3 presents the posterior means of all the VAR coefficients under the Minnesota prior.
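One immediate use of Table 3 is to check that the estimated system is stable, i.e., that the posterior mean of \(\Pi\) satisfies the stationarity condition of §2. The sketch below assumes the rows of Table 3 index the lagged variable and the columns the equation (an assumption about the table's layout; the eigenvalues are unaffected by transposition in any case):

```python
import numpy as np

# Posterior means from Table 3 (Minnesota prior); rows: lagged
# variables pi, e, m, h; columns: the four equations.
table3 = np.array([
    [0.3168,  0.1597,  0.3103, 0.1643],   # pi_{t-1}
    [0.2863,  0.2593,  0.0682, 0.2301],   # e_{t-1}
    [0.0616,  0.0794, -0.1410, 0.0248],   # m_{t-1}
    [0.0167, -0.0451, -0.0428, 0.2792],   # h_{t-1}
])
Pi = table3.T   # coefficient matrix with equations in rows (assumed)

# Stability: spectral radius of Pi strictly below one.
rho = np.max(np.abs(np.linalg.eigvals(Pi)))
print(rho < 1)  # True: the estimated VAR(1) is stationary
```

Stability of the posterior mean is what licenses the convergent forecasts, variance decompositions, and impulse responses reported in the following subsections.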

4.4. The congolese macroeconomic forecasts

Forecasting and nowcasting the economy are risky, often humbling tasks. But they are jobs that many statisticians, economists, and others are required to engage in, as mentioned in many papers [15, 18]. Nowadays, VAR models have become powerful forecasting tools in many central banks and other institutions, and the outputs from Bayesian VAR models seem more accurate and robust than those of other approaches. The Congolese macroeconomic forecasts from October 2018 to March 2019 are given in Table 4.
Table 4. The Congolese macroeconomic forecasts: Oct. 2018 - March 2019.
Inflation Exchange rate Money growth Copper price
October 2018 0.0126 0.0083 0.0191 0.0052
November 2018 0.0055 -0.0022 0.0161 0.0074
December 2018 0.1139 0.1054 0.1758 0.2308
January 2019 -0.0746 -0.0970 -0.1177 -0.3501
February 2019 0.0206 0.0248 0.0521 0.0719
March 2019 1.5536 0.6023 0.2024 -0.7503

4.5. Forecast error variance decompositions

In time series analysis, the Bayesian VAR models attract the interest of many researchers in all real-world fields such as Economics, Finance, Geoscience, Physics, Biology, etc. [9, 17, 18,19]. Indeed, the VAR models are used not only for forecasting and nowcasting but also as policy analysis tools by using impulse response functions and variance decomposition [13, 20].

The variance decomposition of inflation shows that 82 % of its innovations are due to its own innovations, and 13 %, 6 %, and 41 points of the percentage are due to exchange rate, money, and copper price index innovations; with 13 %, the exchange rate contributes the most to CPI inflation. The variance decomposition of the exchange rate shows that 79 % of its innovations are due to its own innovations, and 15 %, 3 %, and 2 points of the percentage are due to inflation rate, money, and copper price index innovations. The variance decomposition of money shows that 97 % of its innovations are due to its own innovations, and 2 %, 1 %, and 27 points of the percentage are due to inflation, exchange rate, and copper price index innovations.

For policy-makers, these results show a close connection between the inflation rate and the exchange rate, owing to this economy's dollarization and extraversion. This monetary phenomenon dates back to the 1990s, when the Congolese economy was brought down by political and socio-economic crises, and to the armed conflicts of the early 2000s.

5. Conclusion

This study presents the Bayesian vector autoregressive models and applies them to forecast the D.R.C.'s macroeconomic data. Bayesian vector autoregressive models are intensively used in macroeconometric analysis to support the policy-making process by improving structural analysis and forecasts, and they are widely used in macroeconomics because they can accommodate the uncertainties inherent in real-world economic problems. In this work, we first give the mathematical and statistical foundations of the BVAR models and then use these models for policy-making. We obtain four main results using BVAR model tools and macroeconomic data. First, there is a close relationship between the inflation rate and the change of the exchange rate in the Democratic Republic of the Congo. This result can be explained by the use of U.S. money, that is, the dollar, in transactions inside the country (the so-called "dollarization" of the economy) and by the fact that the D.R.C.'s economy is small and open. Secondly, money growth has weak effects on inflation. This result is close to the paradigm of "money neutrality in the short run", meaning that in the short term money growth does not affect inflation, whereas according to the Quantity Theory of Money (Q.T.M.) changes in the price level are related to changes in money. Thirdly, copper price shocks affect the exchange rate more. And finally, the BVAR models give the best forecasts among the V.A.R. models considered.

References

  1. Lütkepohl, H. (2005). New Introduction to Multiple Time Series Analysis. Springer Science & Business Media.[Google Scholor]
  2. Hamilton, J. D. (2020). Time Series Analysis. Princeton University Press.[Google Scholor]
  3. Lütkepohl, H., & Krätzig, M. (Eds.). (2004). Applied time series econometrics. Cambridge university press.[Google Scholor]
  4. Tsay, R. S. (2005). Analysis of Financial Time Series. John Wiley & Sons.[Google Scholor]
  5. Robert, C. P. (2007). The Bayesian Choice: From Decision-Theoretic Foundations to Computational Implementation (Vol. 2). New York: Springer.[Google Scholor]
  6. Gelman, A., Carlin, J. B., Stern, H. S., & Rubin, D. B. (1995). Bayesian Data Analysis. Chapman and Hall/CRC.[Google Scholor]
  7. Sarkka, S., & Nummenmaa, A. (2009). Recursive noise adaptive Kalman filtering by variational Bayesian approximations. IEEE Transactions on Automatic Control, 54(3), 596-600.[Google Scholor]
  8. Gelman, A., & Meng, X. L. (Eds.). (2004). Applied Bayesian Modeling and Causal Inference from Incomplete-Data Perspectives. John Wiley & Sons.[Google Scholor]
  9. Berger, J. O. (2013). Statistical Decision Theory and Bayesian Analysis. Springer Science & Business Media.[Google Scholor]
  10. Foata, D., & Fuchs, A. (2002). Processus Stochastiques: Processus de Poisson, Chaînes de Markov et Martingales. Dunod..[Google Scholor]
  11. Takeshi, A., & Amemiya, T. A. (1985). Advanced Econometrics. Harvard University Press.[Google Scholor]
  12. Albis, M. L. F., & Mapa, D. S. (2017). Bayesian averaging of classical estimates in asymmetric vector autoregressive models. Communications in Statistics-Simulation and Computation, 46(3), 1760-1770.[Google Scholor]
  13. Sims, C. A. (1980). Macroeconomics and reality. Econometrica, 48(1), 1-48.[Google Scholor]
  14. Koop, G. (2003). Bayesian Econometrics. John Wiley & Sons.[Google Scholor]
  15. Litterman, R. B. (1986). Forecasting with Bayesian vector autoregressions-five years of experience. Journal of Business & Economic Statistics, 4(1), 25-38.[Google Scholor]
  16. Doan, T., Litterman, R., & Sims, C. (1984). Forecasting and conditional projection using realistic prior distributions. Econometric Reviews, 3(1), 1-100.[Google Scholor]
  17. Tsay, R. S. (2013). Multivariate Time Series Analysis: with \(\mathbb R\) and Financial Applications. John Wiley & Sons.[Google Scholor]
  18. Koop, G., & Korobilis, D. (2013). Large time-varying parameter VARs. Journal of Econometrics, 177(2), 185-198.[Google Scholor]
  19. Chan, J., & Tobias, J. L. (2021). Bayesian Econometrics Methods. In Handbook of Labor, Human Resources and Population Economics (pp. 1-22). Cham: Springer International Publishing.[Google Scholor]
  20. Doan, T., Litterman, R., & Sims, C. (1984). Forecasting and conditional projection using realistic prior distributions. Econometric Reviews, 3(1), 1-100.[Google Scholor]
]]>
Estimation to the number of limit cycles for generalized Kukles differential system https://old.pisrt.org/psr-press/journals/oma-vol-6-issue-2-2022/estimation-to-the-number-of-limit-cycles-for-generalized-kukles-differential-system/ Fri, 30 Dec 2022 18:25:33 +0000 https://old.pisrt.org/?p=6943
OMA-Vol. 6 (2022), Issue 2, pp. 74 - 92 Open Access Full-Text PDF
Houdeifa Melki and Amar Makhlouf
Abstract:This article considers the limit cycles of a class of Kukles polynomial differential systems of the form Eq. (5). We obtain the maximum number of limit cycles that bifurcate from the periodic orbits of a linear center \(\dot{x}=y, \dot{y}=-x,\) by using the averaging theory of first and second order. ]]>

Open Journal of Mathematical Analysis

Estimation to the number of limit cycles for generalized Kukles differential system

Houdeifa Melki\(^{1,*}\) and Amar Makhlouf\(^2\)
\(^1\) Department of Mathematics, University Mostefa Benboulaid Batna 2, Batna, Algeria.
\(^2\) Department of Mathematics, University of Annaba, Laboratory LMA, P.O.Box 12, Annaba 23000, Algeria.
Correspondence should be addressed to Houdeifa Melki at h.melki@univ-batna2.dz

Abstract

This article considers the limit cycles of a class of Kukles polynomial differential systems of the form Eq. (5). We obtain the maximum number of limit cycles that bifurcate from the periodic orbits of a linear center \(\dot{x}=y, \dot{y}=-x,\) by using the averaging theory of first and second order.

Keywords:

Limit cycle; Averaging theory; Kukles systems.

1. Introduction

The study of limit cycles, which are isolated periodic orbits in the set of solutions of differential equations, is one of the main problems in the theory of differential equations; it proceeds by examining their existence, number, and stability. Many mathematicians, physicists, chemists, biologists, and others have been interested in discovering these properties of limit cycles. The motivation for studying limit cycles emerged from the second part of the \(16^{th}\) Hilbert problem [1], which asks for the maximum number of limit cycles of polynomial vector fields of fixed degree.

Several methods exist to study the number of limit cycles that bifurcate from periodic orbits, such as the Abelian integral method [2], the integrating factor [3], the Poincar\'{e} return map [4], the Poincar\'{e}-Melnikov integral method [5], and averaging theory [6, 7]. The study of limit cycles for differential equations or planar differential systems by applying the averaging method has been considered by several authors; see, for instance, [8, 9, 10, 11].

Here we consider a particular case of the \(16^{th}\) Hilbert problem to study the upper bound of the generalized polynomial Kukles system,
\begin{equation}\label{eq1} \begin{cases} \dot{x}=-y,\\ \dot{y}=Q(x,y), \end{cases} \end{equation}
(1)
where \(Q(x,y)\) is a polynomial with real coefficients of degree \(n\). In [12], Kukles introduced the following differential system
\begin{equation}\label{eq2} \begin{cases} \dot{x}=-y,\\ \dot{y}=x+a_{1}x^{2}+a_{2}xy+a_{3}y^{2}+a_{4}x^{3}+a_{5}x^{2}y+a_{6}xy^{2}+a_{7}y^{3}, \end{cases} \end{equation}
(2)
and he gave necessary and sufficient conditions for system (2) to have a center at the origin. In [13], the center problem for system (2) with \(a_{2} = 0\) was solved, and it was proved that at most six limit cycles bifurcate from the origin. In [14], Sadovskii solved the center-focus problem for system (2) with \(a_{2}a_{7}\neq0\) and proved that system (2) can have seven limit cycles bifurcating from the origin. In [8], Llibre and Mereu used the averaging theory to study the maximum number of limit cycles of a class of generalized polynomial Kukles differential systems of the form
\begin{equation}\label{eq3} \begin{cases} \dot{x}=y, \\ \dot{y}=-x-\sum\limits_{k\geq 1}\varepsilon ^{k}\left( f_{n_1}^{k}\left( x\right) +g_{n_2}^{k}\left( x\right) y+h_{n_3}^{k}\left( x\right) y^{2}+d_{0}^{k}y^{3}\right), \end{cases} \end{equation}
(3)
where for every \(k\) the polynomials \(f_{n_1}^{k}, g_{n_2}^{k}\) and \(h_{n_3}^{k}\) have degree \(n_1,n_2\) and \(n_3\) respectively, \(d_{0}^{k}\) is a nonzero real number, and \(\varepsilon\) is a small parameter.

In [9], Boulfoul et al. used the averaging theory to study the maximum number of limit cycles of a class of generalized polynomial Kukles differential systems of the form

\begin{equation}\label{eq4} \begin{cases} \dot{x}=-y, \\ \dot{y}=x- f( x) -g( x) y-h( x) y^{2}-l(x)y^{3}, \end{cases} \end{equation}
(4)
where \(f(x)=\varepsilon f_{1}(x)+\varepsilon^{2} f_{2}(x)\), \(g(x)=\varepsilon g_{1}(x)+\varepsilon^{2} g_{2}(x)\), \(h(x)=\varepsilon h_{1}(x)+\varepsilon^{2} h_{2}(x)\) and \(l(x)=\varepsilon l_{1}(x)+\varepsilon^{2} l_{2}(x)\), where the polynomials \(f_{k}, g_{k}, h_{k}\) and \(l_{k}\) have degree \(n_{1},n_{2},n_{3}\) and \(n_{4}\) respectively, and \(\varepsilon\) is a small parameter.

In this paper, using the averaging theory, we study the maximum number of limit cycles that can bifurcate from the periodic orbits of the linear center \(\dot{x}=y,\dot{y}=-x\) when it is perturbed inside the class of generalized polynomial Kukles differential systems of the form
\begin{equation}\label{eq5} \begin{cases} \dot{x}=y, \\ \dot{y}=-x- f(x)y^{2p} -g(x)y^{2p+1}-h(x)y^{2p+2}-l(x)y^{2p+3}, \end{cases} \end{equation}
(5)
where \(f(x)=\varepsilon f^{1}(x)+\varepsilon^{2} f^{2}(x)\), \(g(x)=\varepsilon g^{1}(x)+\varepsilon^{2} g^{2}(x)\), \(h(x)=\varepsilon h^{1}(x)+\varepsilon^{2} h^{2}(x)\), and \(l(x)=\varepsilon l^{1}(x)+\varepsilon^{2} l^{2}(x)\), where \(f^{k}(x),g^{k}(x),h^{k}(x)\) and \(l^{k}(x)\) have degree \(n_{1}, n_{2}, n_{3}, n_{4}\) respectively for \(k=1, 2\), \(p\) is a positive integer, and \(\varepsilon\) is a small parameter. The main result of this paper is the following theorem.

Theorem 1. For \(\left\vert {\varepsilon}\right\vert \) sufficiently small, the maximum number of limit cycles of the polynomial Kukles differential system (5) which can bifurcate from the periodic orbits of the linear center \(\dot{x}=y, \dot{y}=-x\),

  (a) is \( \max \left\{\left[ \frac{n_{2}}{2}\right] ,\left[ \frac{n_{4}}{2}\right]+1 \right\}, \) by using averaging theory of first order,
  (b) and \( \max \left\{ \left[ \frac{n_{2}}{2}\right] ,\left[ \frac{n_{4}}{2}\right]+1 ,\left[ \frac{n_{1}}{2}\right] +\left[ \frac{n_{2}-1}{2}\right] +p,\left[ \frac{n_{1}}{2}\right] +\left[ \frac{n_{4}-1}{2}\right] +p+1, \left[ \frac{n_{1}-1}{2}\right] +\mu +p,\left[ \frac{n_{2}-1}{2}\right] +\left[ \frac{n_{3}}{2}\right] +p+1,\left[ \frac{n_{3}}{2}\right] +\left[ \frac{n_{4}-1}{2}\right] +p+2, \left[ \frac{n_{3}-1}{2}\right] +\mu +p+1\right\}, \) by using averaging theory of second order, where \(\mu =\min \left\{ \left[ \frac{n_{2}}{2}\right] ,\left[ \frac{n_{4}}{2}\right]+1 \right\} \).

This paper is organized as follows: in Section 2, we introduce the averaging theory of first and second order; in Section 3, we prove our main theorem using the tools of Section 2; finally, we conclude our study by giving some applications.

2. The averaging theory of first and second order

The averaging theory of first and second order, for studying periodic orbits, was developed in [6, 15]. The following result is Theorem 4.2 of [6].

Theorem 2. Consider the differential system

\begin{equation}\label{1.2} \dot x(t)=\varepsilon F_{1}(t,x)+ \varepsilon^2 F_{2}(t,x)+\varepsilon^{3}R(t,x,\varepsilon), \end{equation}
(6)
where \(F_{1},F_{2}:\mathbb{R}\times D \rightarrow \mathbb{R}^n, R:\mathbb{R} \times D \times (-\varepsilon_{f},\varepsilon_{f}) \rightarrow \mathbb{R}^n\) are continuous functions, \(T\)-periodic in the first variable, and \(D\) is an open subset of \(\mathbb{R}^n\). Assume that
  1. \(F_{1}(t,.)\in C^{1}(D)\) for all \(t \in \mathbb{R}\), \(F_{2}, R\) and \(D_{x}F_{1}\) are locally Lipschitz with respect to \(x\), and \(R\) is differentiable with respect to \(\varepsilon\). Define \(f_{1},f_{2}: D \rightarrow \mathbb{R}^n\) by
    \begin{equation}\label{1.13} \left.\begin{array}{rl} f_{1}(z)=& \dfrac{1}{T}\displaystyle \int^T_{0} F_{1}(s ,z)ds,\;\;\\ f_{2}(z)=& \dfrac{1}{T} \displaystyle \int^T_{0} \left[ D_{z}F_{1}(s,z) \int^s_{0} F_{1}(t,z)dt + F_{2}(s,z) \right] ds . \end{array}\right\} \end{equation}
    (7)
  2. For \(V \subset D\) an open and bounded set and for \(\varepsilon \in (-\varepsilon_{f},\varepsilon_{f}) \setminus \{0\} \), there exists \(a_{\varepsilon} \in V\) such that \(f_{1}(a_{\varepsilon})+ \varepsilon f_{2}(a_{\varepsilon})=0\) and \(d_{B}(f_{1}+ \varepsilon f_{2},V,a_{\varepsilon})\neq 0\).
Then for \(|\varepsilon|>0\) sufficiently small there exists a \(T\)-periodic solution \(\varphi(\cdot, \varepsilon)\) of the system (6) such that \(\varphi(0, \varepsilon)\to a\) when \(\varepsilon\to 0\). If \(f_{1}\) is not identically zero, then the zeros of \(f_{1}+\varepsilon f_{2}\) are mainly the zeros of \(f_{1}\) for \(\varepsilon\) sufficiently small. In this case the previous result provides the averaging theory of first order. If \(f_{1}\) is identically zero and \(f_{2}\) is not identically zero, then the zeros of \( f_{1}+\varepsilon f_{2} \) are mainly the zeros of \(f_{2}\) for \(\varepsilon\) sufficiently small. In this case the previous result provides the averaging theory of second order. For additional information on the averaging theory see the books [7, 16].
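The averaged functions in (7) can be made concrete with two toy scalar examples of our own (not systems from this paper): \(f_1\) for the classical van der Pol perturbation, whose simple zero \(r=2\) gives its well-known limit cycle, and \(f_2\) for an artificial \(F_1\) with identically zero average, so that the second-order function decides. A minimal sympy sketch, assuming these particular choices of \(F_1\) and \(F_2\):

```python
import sympy as sp

r, s, t = sp.symbols('r s t', positive=True)
T = 2 * sp.pi

# First order: van der Pol in polar form, dr/ds = eps*F1_vdp + O(eps^2)
F1_vdp = -(1 - r**2 * sp.cos(s)**2) * r * sp.sin(s)**2
f1_vdp = sp.simplify(sp.integrate(F1_vdp, (s, 0, T)) / T)  # r**3/8 - r/2, zero at r = 2

# Second order: toy F1 with zero average, so f2 from (7) decides
F1 = r * sp.sin(s) * sp.cos(s)
F2 = r * (1 - r**2) * sp.sin(s)**2
f1 = sp.integrate(F1, (s, 0, T)) / T                # identically zero
inner = sp.integrate(F1.subs(s, t), (t, 0, s))      # the inner integral of F1 in (7)
f2 = sp.simplify(sp.integrate(sp.diff(F1, r) * inner + F2, (s, 0, T)) / T)
# f2 = r*(1 - r**2)/2, with simple positive zero r = 1
```

In both cases a simple positive zero of the averaged function corresponds to a limit cycle bifurcating from the periodic orbit of that radius.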

3. Proof of statements \((a)\) and \((b)\) of Theorem 1

To prove statement (a), we use the first-order averaging theory. We write system (5) in polar coordinates \((r,\theta)\), where \(x=r\cos\theta\), \(y=r\sin\theta\), \(r>0\), and we expand the polynomials \(f^{1}(x), g^{1}(x),h^{1}(x)\) and \(l^{1}(x)\) which appear in (5) as
\begin{equation}\label{ss} f^{1}(x)=\displaystyle\sum_{i=0}^{n_{1}}a_{i}x^{i}, g^{1}(x)=\displaystyle\sum_{i=0}^{n_{2}}b_{i}x^{i},h^{1}(x)=\displaystyle\sum_{i=0}^{n_{3}}c_{i}x^{i}\;\text{and}\;l^{1}(x)=\displaystyle\sum_{i=0}^{n_{4}}d_{i}x^{i}. \end{equation}
(8)
Therefore system (5) becomes \begin{eqnarray*} \dot{r} &=&-\varepsilon \left(\sum_{i=0}^{n_{1}}a_{i}r^{i+2p}\cos ^{i}\theta \sin ^{2p+1}\theta +\sum_{i=0}^{n_{2}}b_{i}r^{i+2p+1}\cos ^{i}\theta \sin ^{2p+2}\theta \right. \\ &&\left.+\sum_{i=0}^{n_{3}}c_{i}r^{i+2p+2}\cos ^{i}\theta \sin ^{2p+3}\theta +\sum_{i=0}^{n_{4}}d_{i}r^{i+2p+3}\cos ^{i}\theta \sin ^{2p+4}\theta \right),\\ \dot{\theta} &=&-1-\frac{\varepsilon }{r}\left(\sum_{i=0}^{n_{1}}a_{i}r^{i+2p}\cos ^{i+1}\theta \sin ^{2p}\theta +\sum_{i=0}^{n_{2}}b_{i}r^{i+2p+1}\cos ^{i+1}\theta \sin ^{2p+1}\theta \right. \\ &&\left.+\sum_{i=0}^{n_{3}}c_{i}r^{i+2p+2}\cos ^{i+1}\theta \sin ^{2p+2}\theta +\sum_{i=0}^{n_{4}}d_{i}r^{i+2p+3}\cos ^{i+1}\theta \sin ^{2p+3}\theta \right).\\ \end{eqnarray*} Taking \(\theta\) as the new independent variable, we get \begin{eqnarray*} \dfrac{dr}{d\theta} &=&\varepsilon \left(\sum_{i=0}^{n_{1}}a_{i}r^{i+2p}\cos ^{i}\theta \sin ^{2p+1}\theta +\sum_{i=0}^{n_{2}}b_{i}r^{i+2p+1}\cos ^{i}\theta \sin ^{2p+2}\theta \right. \\ &&\left.+\sum_{i=0}^{n_{3}}c_{i}r^{i+2p+2}\cos ^{i}\theta \sin ^{2p+3}\theta +\sum_{i=0}^{n_{4}}d_{i}r^{i+2p+3}\cos ^{i}\theta \sin ^{2p+4}\theta \right)+O(\varepsilon ^{2})\\ &=&\varepsilon F_{1}(r,\theta)+O(\varepsilon ^{2}). \end{eqnarray*} Let \(F_{10}\) be the averaged function of first order associated with system (5). Using the notation introduced in Theorem 2, we compute \(F_{10}\) by integrating \(F_{1}\) with respect to \(\theta\), $$F_{10}(r)=\dfrac{1}{2\pi }\int_{0}^{2\pi}F_{1}(r,\theta)d\theta.$$
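The polar change of variables can be sanity-checked symbolically in a minimal particular case of our own choosing, say \(p=1\), \(f^{1}(x)=a_{0}\) constant and \(g^{1}=h^{1}=l^{1}=0\), where \(\dot{r}\) should reduce to the \(i=0\) term \(-\varepsilon a_{0}r^{2p}\sin^{2p+1}\theta\) of the series above. A short sympy check:

```python
import sympy as sp

r, th, eps, a0 = sp.symbols('r theta epsilon a_0', positive=True)
p = 1
x, y = r*sp.cos(th), r*sp.sin(th)

# system (5) with f^1 = a0 and g^1 = h^1 = l^1 = 0
xdot = y
ydot = -x - eps*a0*y**(2*p)

# rdot = (x*xdot + y*ydot)/r in polar coordinates
rdot = sp.simplify((x*xdot + y*ydot)/r)

# matches the i = 0 term -eps*a0*r^{2p}*sin^{2p+1}(theta) of the series
assert sp.simplify(rdot + eps*a0*r**(2*p)*sp.sin(th)**(2*p+1)) == 0
```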

Lemma 1. Let \(A_{i,j}\left( \theta \right) =\cos ^{i}\theta \sin ^{j}\theta \) and \(\xi _{i,j}\left( \theta \right) =\frac{1}{\theta }\int_{0}^{\theta }A_{i,j}(s)\,ds\). Then \begin{equation*} \int_{0}^{2\pi }A_{i,j}(\theta )d\theta =\begin{cases} 0, & \text{if } i \text{ is odd or } j \text{ is odd}, \\ 2\pi \xi _{i,j}\left( 2\pi \right), & \text{if } i \text{ and } j \text{ are even}, \end{cases} \end{equation*} and \begin{equation*} \xi_{2i,2j+4}(2\pi ) =\frac{2j+3}{2i+2j+4}\xi_{2i,2j+2}(2\pi ). \end{equation*}
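Both identities of Lemma 1 are classical Wallis-type facts and can be spot-checked with sympy (an independent numerical aside, not part of the proof):

```python
import sympy as sp

theta = sp.symbols('theta')

def I(i, j):
    """Integral of cos^i(theta)*sin^j(theta) over [0, 2*pi]; equals 2*pi*xi_{i,j}(2*pi)."""
    return sp.integrate(sp.cos(theta)**i * sp.sin(theta)**j, (theta, 0, 2*sp.pi))

# vanishes when i or j is odd
assert I(3, 2) == 0 and I(2, 5) == 0

# reduction: I(2i, 2j+4) = (2j+3)/(2i+2j+4) * I(2i, 2j+2), hence the same for xi
for i in range(2):
    for j in range(2):
        lhs = I(2*i, 2*j + 4)
        rhs = sp.Rational(2*j + 3, 2*i + 2*j + 4) * I(2*i, 2*j + 2)
        assert sp.simplify(lhs - rhs) == 0
```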

Using Lemma 1, we compute the function \(F_{10}(r)\):
\begin{eqnarray}\label{ev1} F_{10}\left( r\right) &=&\frac{1}{2\pi }\int_{0}^{2\pi }\left( \sum _{\substack{ i=0 \\ i\ \text{even}}}^{n_{2}}b_{i}r^{i+2p+1}A_{i,2p+2}\left( \theta \right) +\sum_{\substack{ i=0 \\ i\ \text{even}}}^{n_{4}}d_{i}r^{i+2p+3}A_{i,2p+4}\left( \theta \right) \right) d\theta \nonumber \\ &=&\frac{1}{2\pi }\int_{0}^{2\pi }\left( \sum_{i=0}^{\left[ \frac{n_{2}}{2}\right] }b_{2i}r^{2i+2p+1}A_{2i,2p+2}\left( \theta \right) +\sum_{i=0}^{\left[ \frac{n_{4}}{2}\right] }d_{2i}r^{2i+2p+3}A_{2i,2p+4}\left( \theta \right) \right) d\theta \nonumber\\ &=& r^{2p+1}\left( \sum_{i=0}^{\left[ \frac{n_{2}}{2}\right] }b_{2i}r^{2i}\xi _{2i,2p+2}\left( 2\pi \right) +\sum_{i=0}^{\left[ \frac{n_{4}}{2}\right] }d_{2i}r^{2i+2}\xi _{2i,2p+4}\left( 2\pi \right) \right)\nonumber \\ &=& r^{2p+1}\left( \sum_{i=0}^{\left[ \frac{n_{2}}{2}\right] }b_{2i}r^{2i}\xi _{2i,2p+2}\left( 2\pi \right) +\sum_{i=0}^{\left[ \frac{n_{4}}{2}\right] }\frac{2p+3}{2i+2p+4}d_{2i}r^{2i+2}\xi _{2i,2p+2}\left( 2\pi \right) \right) . \end{eqnarray}
(9)
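The averaged function (9) can be sanity-checked symbolically in a small particular case. With the hypothetical choice \(p=1\), \(g^{1}(x)=-1+x^{2}\) (so \(n_{2}=2\), \(b_{0}=-1\), \(b_{2}=1\)), \(l^{1}(x)=\tfrac{1}{10}\) (so \(n_{4}=0\), \(d_{0}=\tfrac{1}{10}\)) and \(f^{1}=h^{1}=0\), the sympy sketch below integrates \(F_{1}\) directly and counts the positive roots of \(F_{10}\):

```python
import sympy as sp

r, th = sp.symbols('r theta', positive=True)

# hypothetical coefficients: b0 = -1, b2 = 1 (from g^1), d0 = 1/10 (from l^1), p = 1
b0, b2, d0 = -1, 1, sp.Rational(1, 10)

# F1(r, theta) for p = 1: b-terms carry sin^4, the d-term carries sin^6
F1 = (b0*r**3*sp.sin(th)**4 + b2*r**5*sp.cos(th)**2*sp.sin(th)**4
      + d0*r**5*sp.sin(th)**6)

F10 = sp.simplify(sp.integrate(F1, (th, 0, 2*sp.pi)) / (2*sp.pi))
pos_roots = [s for s in sp.solve(F10, r) if s.is_positive]

# statement (a) bound: max{[n2/2], [n4/2]+1} = max{1, 1} = 1 positive root
assert len(pos_roots) <= 1
```

Here the bound is attained: \(F_{10}(r)=\tfrac{3}{32}r^{5}-\tfrac{3}{8}r^{3}\) has the single positive root \(r=2\).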
Then the polynomial \(F_{10}(r)\) has at most \(\max \left\{ \left[ \frac{n_{2}}{2}\right] ,\left[ \frac{n_{4}}{2}\right] +1\right\} \) positive roots, and hence statement (a) of Theorem 1 is proved. To prove statement (b) of Theorem 1, we use the second-order averaging theory. We take \(f^{1}(x), g^{1}(x),h^{1}(x)\) and \(l^{1}(x)\) as defined in (8), and let
\begin{equation} f^{2}(x)=\displaystyle\sum_{i=0}^{n_{1}}\bar{a}_{i}x^{i}, g^{2}(x)=\displaystyle\sum_{i=0}^{n_{2}}\bar{b}_{i}x^{i},h^{2}(x)=\displaystyle\sum_{i=0}^{n_{3}}\bar{c}_{i}x^{i}\;\text{and}\;l^{2}(x)=\displaystyle\sum_{i=0}^{n_{4}}\bar{d}_{i}x^{i}. \end{equation}
(10)
In polar coordinates \((r,\theta)\), where \(x=r\cos\theta\), \(y=r\sin\theta\), \(r>0\), the differential system (5) becomes
\begin{equation} \left\{ \begin{array}{c} \dot{r}=-\varepsilon G_{1}\left( r,\theta \right) -\varepsilon ^{2}H_{1}\left( r,\theta \right) , \\ \dot{\theta}=-1-\frac{\varepsilon }{r}G_{2}\left( r,\theta \right) -\frac{\varepsilon ^{2}}{r}H_{2}\left( r,\theta \right) . \end{array} \right. \label{at2} \end{equation}
(11)
Taking \(\theta\) as the new independent variable, we find \( \dfrac{dr}{d\theta} =\varepsilon F_{1}(r,\theta)+\varepsilon^{2}F_{2}(r,\theta)+O(\varepsilon ^{3}), \) where
\begin{equation}\label{d2} F_{1}(r,\theta )=G_{1}\text{ and }F_{2}(r,\theta )=H_{1}-\frac{1}{r}G_{1}G_{2}, \end{equation}
(12)
and
\begin{equation}\label{d21} \begin{cases} G_{1} =\sum\limits_{i=0}^{n_{1}}a_{i}r^{i+2p}A_{i,2p+1}(\theta) +\sum\limits_{i=0}^{n_{2}}b_{i}r^{i+2p+1}A_{i,2p+2}(\theta)+\sum\limits_{i=0}^{n_{3}}c_{i}r^{i+2p+2}A_{i,2p+3}(\theta) +\sum\limits_{i=0}^{n_{4}}d_{i}r^{i+2p+3}A_{i,2p+4}(\theta) ,\\ H_{1} =\sum\limits_{i=0}^{n_{1}}\bar{a}_{i}r^{i+2p}A_{i,2p+1}(\theta) +\sum\limits_{i=0}^{n_{2}}\bar{b}_{i}r^{i+2p+1}A_{i,2p+2}(\theta)+\sum\limits_{i=0}^{n_{3}}\bar{c}_{i}r^{i+2p+2}A_{i,2p+3}(\theta) +\sum\limits_{i=0}^{n_{4}}\bar{d}_{i}r^{i+2p+3}A_{i,2p+4}(\theta) ,\\ G_{2} =\sum\limits_{i=0}^{n_{1}}a_{i}r^{i+2p}A_{i+1,2p}(\theta) +\sum\limits_{i=0}^{n_{2}}b_{i}r^{i+2p+1}A_{i+1,2p+1}(\theta)+\sum\limits_{i=0}^{n_{3}}c_{i}r^{i+2p+2}A_{i+1,2p+2}(\theta) +\sum\limits_{i=0}^{n_{4}}d_{i}r^{i+2p+3}A_{i+1,2p+3}(\theta) ,\\ H_{2} =\sum\limits_{i=0}^{n_{1}}\bar{a}_{i}r^{i+2p}A_{i+1,2p}(\theta) +\sum\limits_{i=0}^{n_{2}}\bar{b}_{i}r^{i+2p+1}A_{i+1,2p+1}(\theta)+\sum\limits_{i=0}^{n_{3}}\bar{c}_{i}r^{i+2p+2}A_{i+1,2p+2}(\theta) +\sum\limits_{i=0}^{n_{4}}\bar{d}_{i}r^{i+2p+3}A_{i+1,2p+3}(\theta) .\end{cases} \end{equation}
(13)
In order to apply averaging theory of second order, \(F_{10}\) must be identically equal to zero. Therefore from (9), we take
\begin{eqnarray}\label{ana1} &&\left\{ \begin{array}{c} b_{2i}=-\frac{2p+3}{2i-1}d_{2i-2}\;\;\; 1\leq i\leq \mu , \\ b_{0}=b_{2i}=d_{2i-2}=0\;\;\; \mu +1\leq i\leq \lambda , \end{array} \right. \end{eqnarray}
(14)
where \(\lambda =\max \left\{ \left[ \frac{n_{2}}{2}\right] ,\left[ \frac{n_{4}}{2}\right] +1\right\} .\) Let \(F_{20}\) be the averaged function of second order associated with system (5). Now, we determine \(F_{20}\) by integrating with respect to \(\theta\) \begin{eqnarray*} F_{20}(r)&=&F_{20}^{1}(r)+F_{20}^{2}(r), \end{eqnarray*} where \begin{eqnarray*} F_{20}^{1}(r)&=&\dfrac{1}{2\pi}\int_{0}^{2\pi}\dfrac{d}{dr}F_{1}(r,\theta)y(r,\theta)d\theta\;\;\;\text{ and }\;\;\; F_{20}^{2}(r)=\dfrac{1}{2\pi}\int_{0}^{2\pi}F_{2}(r,\theta)d\theta. \end{eqnarray*} By substituting (14) in (13) and (12), we get
\( F_{1}(r,\theta ) =\sum\limits_{i=0}^{\left[ \frac{n_{1}-1}{2}\right] }a_{2i+1}A_{2i+1,2p+1}\left( \theta \right) r^{2i+2p+1}+\sum\limits_{i=0}^{\left[ \frac{n_{1}}{2}\right] }a_{2i}A_{2i,2p+1}\left( \theta \right) r^{2i+2p}+ \sum\limits_{i=0}^{\left[ \frac{n_{2}-1}{2}\right] }b_{2i+1}A_{2i+1,2p+2}\left( \theta \right) r^{2i+2p+2}+\sum\limits_{i=0}^{\left[ \frac{n_{3}-1}{2}\right] }c_{2i+1}A_{2i+1,2p+3}\left( \theta \right) r^{2i+2p+3}+ \sum\limits_{i=0}^{\left[ \frac{n_{3}}{2}\right] }c_{2i}A_{2i,2p+3}\left( \theta \right) r^{2i+2p+2}+\sum\limits_{i=0}^{\left[ \frac{n_{4}-1}{2}\right] }d_{2i+1}A_{2i+1,2p+4}\left( \theta \right) r^{2i+2p+4}+ \sum\limits_{i=1}^{\mu }\left( A_{2i-2,2p+4}\left( \theta \right) -\frac{2p+3}{2i-1}A_{2i,2p+2}\left( \theta \right) \right) d_{2i-2}r^{2i+2p+1}. \)
To compute \(F_{20}^{1}\), we must derive \(F_{1}\), so
\( \frac{dF_{1}(r,\theta )}{dr} =\sum\limits_{i=0}^{\left[ \frac{n_{1}-1}{2}\right] }(2i+2p+1)a_{2i+1}A_{2i+1,2p+1}\left( \theta \right) r^{2i+2p}+ \sum\limits_{i=0}^{\left[ \frac{n_{1}}{2}\right] }(2i+2p)a_{2i}A_{2i,2p+1}\left( \theta \right) r^{2i+2p-1}+ \sum\limits_{i=0}^{\left[ \frac{n_{2}-1}{2}\right] }(2i+2p+2)b_{2i+1}A_{2i+1,2p+2}\left( \theta \right) r^{2i+2p+1}+ \sum\limits_{i=0}^{\left[ \frac{n_{3}-1}{2}\right] }(2i+2p+3)c_{2i+1}A_{2i+1,2p+3}\left( \theta \right) r^{2i+2p+2}+ \sum\limits_{i=0}^{\left[ \frac{n_{3}}{2}\right] }(2i+2p+2)c_{2i}A_{2i,2p+3}\left( \theta \right) r^{2i+2p+1}+ \sum\limits_{i=0}^{\left[ \frac{n_{4}-1}{2}\right] }(2i+2p+4)d_{2i+1}A_{2i+1,2p+4}\left( \theta \right) r^{2i+2p+3}+ \sum\limits_{i=1}^{\mu }(2i+2p+1)d_{2i-2}\left( A_{2i-2,2p+4}\left( \theta \right) -\frac{2p+3}{2i-1}A_{2i,2p+2}\left( \theta \right) \right) r^{2i+2p}. \)
And we have
\( y(r,\theta) =\int_{0}^{\theta }F_{1}(r,s)ds =\sum\limits_{i=0}^{\left[ \frac{n_{1}-1}{2}\right] }a_{2i+1}r^{2i+2p+1}\left( \beta _{i,p,0}+\sum\limits_{l=1}^{i+p+1}\beta _{i,p,l}\cos \left( 2l\right) \theta \right) +\sum\limits_{i=0}^{\left[ \frac{n_{1}}{2}\right] }a_{2i}r^{2i+2p}\left( \tilde{\beta}_{i,p,0}+\sum\limits_{l=1}^{i+p+1}\tilde{\beta}_{i,p,l}\cos \left( 2l-1\right) \theta \right) +\sum\limits_{i=0}^{\left[ \frac{n_{2}-1}{2}\right] }b_{2i+1}r^{2i+2p+2}\sum\limits_{l=0}^{i+p+1}\bar{\beta}_{i,p,l}\sin \left( 2l+1\right) \theta +\sum\limits_{i=0}^{\left[ \frac{n_{3}-1}{2}\right] }c_{2i+1}r^{2i+2p+3}\left( \gamma _{i,p,0}+\sum\limits_{l=1}^{i+p+2}\gamma _{i,p,l}\cos \left( 2l\right) \theta \right) +\sum\limits_{i=0}^{\left[ \frac{n_{3}}{2}\right] }c_{2i}r^{2i+2p+2}\left( \tilde{\gamma}_{i,p,0}+\sum\limits_{l=1}^{i+p+2}\tilde{\gamma}_{i,p,l}\cos \left( 2l-1\right) \theta \right) +\sum\limits_{i=0}^{\left[ \frac{n_{4}-1}{2}\right] }d_{2i+1}r^{2i+2p+4}\sum\limits_{l=0}^{i+p+2}\bar{\gamma}_{i,p,l}\sin (2l+1)\theta +\sum\limits_{i=1}^{\mu }d_{2i-2}r^{2i+2p+1}\sum\limits_{l=1}^{i+p+1}\delta _{i,p,l}\sin (2l)\theta , \)
where \(\beta_{i,p,l}, \tilde{\beta}_{i,p,l}, \bar{\beta}_{i,p,l}, \gamma _{i,p,l}, \tilde{\gamma}_{i,p,l}, \bar{\gamma}_{i,p,l}\) and \(\delta _{i,p,l}\) are constants. Now, we write \begin{equation*} F_{20}^{1}(r)=\Upsilon _{1}(r)+\Upsilon _{2}(r)+\Upsilon _{3}(r)+\Upsilon _{4}(r)+\Upsilon _{5}(r)+\Upsilon _{6}(r)+\Upsilon _{7}(r), \end{equation*} where \begin{eqnarray*} \Upsilon _{1}(r) &=&\frac{1}{2\pi }\int_{0}^{2\pi }\left( \sum_{i=0}^{\left[ \frac{n_{1}-1}{2}\right] }(2i+2p+1)a_{2i+1}A_{2i+1,2p+1}\left( \theta \right) r^{2i+2p}\right) y(r,\theta )d\theta , \\ \Upsilon _{2}(r) &=&\frac{1}{2\pi }\int_{0}^{2\pi }\left( \sum_{i=0}^{\left[ \frac{n_{1}}{2}\right] }(2i+2p)a_{2i}A_{2i,2p+1}\left( \theta \right) r^{2i+2p-1}\right) y(r,\theta )d\theta , \\ \Upsilon _{3}(r) &=&\frac{1}{2\pi }\int_{0}^{2\pi }\left( \sum_{i=0}^{\left[ \frac{n_{2}-1}{2}\right] }(2i+2p+2)b_{2i+1}A_{2i+1,2p+2}\left( \theta \right) r^{2i+2p+1}\right) y(r,\theta )d\theta , \\ \Upsilon _{4}(r) &=&\frac{1}{2\pi }\int_{0}^{2\pi }\left( \sum_{i=0}^{\left[ \frac{n_{3}-1}{2}\right] }(2i+2p+3)c_{2i+1}A_{2i+1,2p+3}\left( \theta \right) r^{2i+2p+2}\right) y(r,\theta )d\theta , \\ \Upsilon _{5}(r) &=&\frac{1}{2\pi }\int_{0}^{2\pi }\left( \sum_{i=0}^{\left[ \frac{n_{3}}{2}\right] }(2i+2p+2)c_{2i}A_{2i,2p+3}\left( \theta \right) r^{2i+2p+1}\right) y(r,\theta )d\theta , \\ \Upsilon _{6}(r) &=&\frac{1}{2\pi }\int_{0}^{2\pi }\left( \sum_{i=0}^{\left[ \frac{n_{4}-1}{2}\right] }(2i+2p+4)d_{2i+1}A_{2i+1,2p+4}\left( \theta \right) r^{2i+2p+3}\right) y(r,\theta )d\theta , \\ \Upsilon _{7}(r) &=&\frac{1}{2\pi }\int_{0}^{2\pi }\left( \sum_{i=1}^{\mu }(2i+2p+1)d_{2i-2}\left( A_{2i-2,2p+4}\left( \theta \right) -\frac{2p+3}{2i-1}A_{2i,2p+2}\left( \theta \right) \right) r^{2i+2p}\right) y(r,\theta )d\theta . \end{eqnarray*} In the following lemmas, we compute the integrals \(\Upsilon _{1}(r),\ldots ,\Upsilon _{7}(r)\).

Lemma 2. The integral \(\Upsilon _{1}(r)\) is given by the following

\begin{eqnarray}\label{p1} \Upsilon _{1}(r) &=&\sum_{i=0}^{\left[ \frac{n_{1}-1}{2}\right] }\sum_{s=1}^{\mu }\frac{(2i+2p+1)}{2}a_{2i+1}d_{2s-2}\sum_{l=1}^{s+p+1}\delta _{s,p,l}D_{i,p,l}r^{2i+2s+4p+1}. \end{eqnarray}
(15)

Proof. By using the integrals in Appendix, we get \begin{equation*} \left( a_{1}\right)\qquad \frac{1}{2\pi }\int_{0}^{2\pi }\left( \sum_{i=0}^{\left[ \frac{n_{1}-1}{2}\right] }(2i+2p+1)a_{2i+1}A_{2i+1,2p+1}\left( \theta \right) r^{2i+2p}\right) \times \end{equation*} \begin{equation*} \left( \sum_{s=0}^{\left[ \frac{n_{1}-1}{2}\right] }a_{2s+1}r^{2s+2p+1}\left( \beta _{s,p,0}+\sum_{l=1}^{s+p+1}\beta _{s,p,l}\cos \left( 2l\right) \theta \right) \right)d\theta =0, \end{equation*} \begin{equation*} \left( b_{1}\right)\qquad \frac{1}{2\pi }\int_{0}^{2\pi }\left( \sum_{i=0}^{\left[ \frac{n_{1}-1}{2}\right] }(2i+2p+1)a_{2i+1}A_{2i+1,2p+1}\left( \theta \right) r^{2i+2p}\right) \times \end{equation*} \begin{equation*} \left( \sum_{s=0}^{\left[ \frac{n_{1}}{2}\right] }a_{2s}r^{2s+2p}\left( \tilde{\beta}_{s,p,0}+ \sum_{l=1}^{s+p+1}\tilde{\beta}_{s,p,l}\cos \left( 2l-1\right) \theta \right) \right)d\theta =0, \end{equation*} \begin{equation*} \left( c_{1}\right)\qquad \frac{1}{2\pi }\int_{0}^{2\pi }\left( \sum_{i=0}^{\left[ \frac{n_{1}-1}{2}\right] }(2i+2p+1)a_{2i+1}A_{2i+1,2p+1}\left( \theta \right) r^{2i+2p}\right) \times \end{equation*} \begin{equation*} \left( \sum_{s=0}^{\left[ \frac{n_{2}-1}{2}\right] }b_{2s+1}r^{2s+2p+2}\sum_{l=0}^{s+p+1}\bar{\beta}_{s,p,l}\sin \left( 2l+1\right) \theta \right) d\theta =0, \end{equation*} \begin{equation*} \left( d_{1}\right)\qquad \frac{1}{2\pi }\int_{0}^{2\pi }\left( \sum_{i=0}^{\left[ \frac{n_{1}-1}{2}\right] }(2i+2p+1)a_{2i+1}A_{2i+1,2p+1}\left( \theta \right) r^{2i+2p}\right) \times \end{equation*} \begin{equation*} \left( \sum_{s=0}^{\left[ \frac{n_{3}-1}{2}\right] }c_{2s+1}r^{2s+2p+3}\left( \gamma _{s,p,0}+\sum_{l=1}^{s+p+2}\gamma _{s,p,l}\cos \left( 2l\right) \theta \right) \right) d\theta =0, \end{equation*} \begin{equation*} \left( e_{1}\right)\qquad \frac{1}{2\pi }\int_{0}^{2\pi }\left( \sum_{i=0}^{\left[ \frac{n_{1}-1}{2}\right] }(2i+2p+1)a_{2i+1}A_{2i+1,2p+1}\left( \theta \right) r^{2i+2p}\right) \times \end{equation*} \begin{equation*} \left( \sum_{s=0}^{\left[ \frac{n_{3}}{2}\right] }c_{2s}r^{2s+2p+2}\left( \tilde{\gamma}_{s,p,0}+\sum_{l=1}^{s+p+2}\tilde{\gamma}_{s,p,l}\cos \left( 2l-1\right) \theta \right) \right) d\theta =0, \end{equation*} \begin{equation*} \left( f_{1}\right)\qquad \frac{1}{2\pi }\int_{0}^{2\pi }\left( \sum_{i=0}^{\left[ \frac{n_{1}-1}{2}\right] }(2i+2p+1)a_{2i+1}A_{2i+1,2p+1}\left( \theta \right) r^{2i+2p}\right) \times \end{equation*} \begin{equation*} \left( \sum_{s=0}^{\left[ \frac{n_{4}-1}{2}\right] }d_{2s+1}r^{2s+2p+4}\sum_{l=0}^{s+p+2}\bar{\gamma}_{s,p,l}\sin (2l+1)\theta \right) d\theta =0, \end{equation*} \begin{equation*} \left( g_{1}\right)\qquad \frac{1}{2\pi }\int_{0}^{2\pi }\left( \sum_{i=0}^{\left[ \frac{n_{1}-1}{2}\right] }(2i+2p+1)a_{2i+1}A_{2i+1,2p+1}\left( \theta \right) r^{2i+2p}\right) \times \end{equation*} \begin{equation*} \left( \sum_{s=1}^{\mu }d_{2s-2}r^{2s+2p+1}\sum_{l=1}^{s+p+1}\delta _{s,p,l}\sin (2l)\theta \right)d\theta = \end{equation*} \begin{equation*} \sum_{i=0}^{\left[ \frac{n_{1}-1}{2}\right] }\sum_{s=1}^{\mu }\frac{(2i+2p+1)}{2}a_{2i+1}d_{2s-2}\sum_{l=1}^{s+p+1}\delta _{s,p,l}D_{i,p,l}r^{2i+2s+4p+1}. \end{equation*} We observe that the sum of the integrals \(\left( a_{1}\right)-\left( g_{1}\right)\) is the polynomial (15). This ends the proof of Lemma 2.
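The orthogonality pattern driving this proof, where only the \(\left( g_{1}\right)\)-type products of \(A_{2i+1,2p+1}\) against even sine harmonics \(\sin (2l)\theta\) survive integration, can be spot-checked with sympy for sample exponents (an independent aside; the constants from the Appendix are not needed for this check):

```python
import sympy as sp

th = sp.symbols('theta')
i, p, l = 1, 1, 1  # sample indices

A = sp.cos(th)**(2*i + 1) * sp.sin(th)**(2*p + 1)  # A_{2i+1, 2p+1}(theta)

# (a_1)-type: against even cosine harmonics -> vanishes
c_even = sp.integrate(A * sp.cos(2*l*th), (th, 0, 2*sp.pi))
# (c_1)-type: against odd sine harmonics -> vanishes
s_odd = sp.integrate(A * sp.sin((2*l + 1)*th), (th, 0, 2*sp.pi))
# (g_1)-type: against even sine harmonics -> generically nonzero
s_even = sp.integrate(A * sp.sin(2*l*th), (th, 0, 2*sp.pi))

assert c_even == 0 and s_odd == 0 and s_even != 0
```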

Lemma 3. The integral \(\Upsilon _{2}(r)\) is given by the following,

\begin{eqnarray}\label{p2} \Upsilon _{2}(r) &=&\sum_{i=0}^{\left[ \frac{n_{1}}{2}\right] }\sum_{s=0}^{\left[ \frac{n_{2}-1}{2}\right] }(i+p)a_{2i}b_{2s+1}\sum_{l=0}^{s+p+1}\bar{\beta}_{s,p,l}C_{i,p,l}r^{2i+2s+4p+1}\nonumber\\ &&+\sum_{i=0}^{\left[ \frac{n_{1}}{2}\right] }\sum_{s=0}^{\left[ \frac{n_{4}-1}{2}\right] }(i+p)a_{2i}d_{2s+1}\sum_{l=0}^{s+p+2}\bar{\gamma}_{s,p,l}C_{i,p,l}r^{2i+2s+4p+3}. \end{eqnarray}
(16)

Proof. By using the integrals in Appendix, we get \begin{equation*} \left( a_{2}\right)\qquad \frac{1}{2\pi }\int_{0}^{2\pi }\left( \sum_{i=0}^{\left[ \frac{n_{1}}{2}\right] }(2i+2p)a_{2i}A_{2i,2p+1}\left( \theta \right) r^{2i+2p-1}\right) \times \end{equation*} \begin{equation*} \left( \sum_{s=0}^{\left[ \frac{n_{1}-1}{2}\right] }a_{2s+1}r^{2s+2p+1}\left( \beta _{s,p,0}+\sum_{l=1}^{s+p+1}\beta _{s,p,l}\cos \left( 2l\right) \theta \right) \right)d\theta =0, \end{equation*} \begin{equation*} \left( b_{2}\right)\qquad \frac{1}{2\pi }\int_{0}^{2\pi }\left( \sum_{i=0}^{\left[ \frac{n_{1}}{2}\right] }(2i+2p)a_{2i}A_{2i,2p+1}\left( \theta \right) r^{2i+2p-1}\right) \times \end{equation*} \begin{equation*} \left( \sum_{s=0}^{\left[ \frac{n_{1}}{2}\right] }a_{2s}r^{2s+2p}\left( \tilde{\beta}_{s,p,0}+\sum_{l=1}^{s+p+1}\tilde{\beta}_{s,p,l}\cos \left( 2l-1\right) \theta \right) \right)d\theta =0, \end{equation*} \begin{equation*} \left( c_{2}\right)\qquad \frac{1}{2\pi }\int_{0}^{2\pi }\left( \sum_{i=0}^{\left[ \frac{n_{1}}{2}\right] }(2i+2p)a_{2i}A_{2i,2p+1}\left( \theta \right) r^{2i+2p-1}\right) \times \end{equation*} \begin{equation*} \left( \sum_{s=0}^{\left[ \frac{n_{2}-1}{2}\right] }b_{2s+1}r^{2s+2p+2}\sum_{l=0}^{s+p+1}\bar{\beta}_{s,p,l}\sin \left( 2l+1\right) \theta \right) d\theta = \end{equation*} \begin{equation*} \sum_{i=0}^{\left[ \frac{n_{1}}{2}\right] }\sum_{s=0}^{\left[ \frac{n_{2}-1}{2}\right] }(i+p)a_{2i}b_{2s+1}\sum_{l=0}^{s+p+1}\bar{\beta}_{s,p,l}C_{i,p,l}r^{2i+2s+4p+1}, \end{equation*} \begin{equation*} \left( d_{2}\right)\qquad \frac{1}{2\pi }\int_{0}^{2\pi }\left( \sum_{i=0}^{\left[ \frac{n_{1}}{2}\right] }(2i+2p)a_{2i}A_{2i,2p+1}\left( \theta \right) r^{2i+2p-1}\right) \times \end{equation*} \begin{equation*} \left( \sum_{s=0}^{\left[ \frac{n_{3}-1}{2}\right] }c_{2s+1}r^{2s+2p+3}\left( \gamma _{s,p,0}+\sum_{l=1}^{s+p+2}\gamma _{s,p,l}\cos \left( 2l\right) \theta \right) \right) d\theta =0, \end{equation*} \begin{equation*} \left( e_{2}\right)\qquad \frac{1}{2\pi }\int_{0}^{2\pi }\left( \sum_{i=0}^{\left[ \frac{n_{1}}{2}\right] }(2i+2p)a_{2i}A_{2i,2p+1}\left( \theta \right) r^{2i+2p-1}\right) \times \end{equation*} \begin{equation*} \left( \sum_{s=0}^{\left[ \frac{n_{3}}{2}\right] }c_{2s}r^{2s+2p+2}\left( \tilde{\gamma}_{s,p,0}+\sum_{l=1}^{s+p+2}\tilde{\gamma}_{s,p,l}\cos \left( 2l-1\right) \theta \right) \right) d\theta =0, \end{equation*} \begin{equation*} \left( f_{2}\right)\qquad \frac{1}{2\pi }\int_{0}^{2\pi }\left( \sum_{i=0}^{\left[ \frac{n_{1}}{2}\right] }(2i+2p)a_{2i}A_{2i,2p+1}\left( \theta \right) r^{2i+2p-1}\right) \times \end{equation*} \begin{equation*} \left( \sum_{s=0}^{\left[ \frac{n_{4}-1}{2}\right] }d_{2s+1}r^{2s+2p+4}\sum_{l=0}^{s+p+2}\bar{\gamma}_{s,p,l}\sin (2l+1)\theta \right) d\theta = \end{equation*} \begin{equation*} \sum_{i=0}^{\left[ \frac{n_{1}}{2}\right] }\sum_{s=0}^{\left[ \frac{n_{4}-1}{2}\right] }(i+p)a_{2i}d_{2s+1}\sum_{l=0}^{s+p+2}\bar{\gamma}_{s,p,l}C_{i,p,l}r^{2i+2s+4p+3}, \end{equation*} \begin{equation*} \left( g_{2}\right)\qquad \frac{1}{2\pi }\int_{0}^{2\pi }\left( \sum_{i=0}^{\left[ \frac{n_{1}}{2}\right] }(2i+2p)a_{2i}A_{2i,2p+1}\left( \theta \right) r^{2i+2p-1}\right) \times \end{equation*} \begin{equation*} \left( \sum_{s=1}^{\mu }d_{2s-2}r^{2s+2p+1}\sum_{l=1}^{s+p+1}\delta _{s,p,l}\sin (2l)\theta \right) d\theta =0. \end{equation*} We observe that the sum of the integrals \(\left( a_{2}\right)-\left( g_{2}\right)\) is the polynomial (16). This ends the proof of Lemma 3.

Lemma 4. The integral \(\Upsilon _{3}(r)\) is given by the following,

\begin{eqnarray}\label{p3} \Upsilon _{3}(r) &=&\sum_{i=0}^{\left[ \frac{n_{2}-1}{2}\right] }\sum_{s=0}^{\left[ \frac{n_{1}}{2}\right] }(i+p+1)b_{2i+1}a_{2s}\sum_{l=1}^{s+p+1}\tilde{\beta}_{s,p,l}E_{i,p,l}r^{2i+2s+4p+1}\nonumber\\ &&+\sum_{i=0}^{\left[ \frac{n_{2}-1}{2}\right] }\sum_{s=0}^{\left[ \frac{n_{3}}{2}\right] }(i+p+1)b_{2i+1}c_{2s}\sum_{l=1}^{s+p+2}\tilde{\gamma}_{s,p,l}E_{i,p,l}r^{2i+2s+4p+3}. \end{eqnarray}
(17)

Proof. By using the integrals in Appendix, we get \begin{equation*} \left( a_{3}\right)\qquad \frac{1}{2\pi }\int_{0}^{2\pi }\left( \sum_{i=0}^{\left[ \frac{n_{2}-1}{2}\right] }(2i+2p+2)b_{2i+1}A_{2i+1,2p+2}\left( \theta \right) r^{2i+2p+1}\right) \times \end{equation*} \begin{equation*} \left( \sum_{s=0}^{\left[ \frac{n_{1}-1}{2}\right] }a_{2s+1}r^{2s+2p+1}\left( \beta _{s,p,0}+\sum_{l=1}^{s+p+1}\beta _{s,p,l}\cos \left( 2l\right) \theta \right) \right) d\theta =0, \end{equation*} \begin{equation*} \left( b_{3}\right)\qquad \frac{1}{2\pi }\int_{0}^{2\pi }\left( \sum_{i=0}^{\left[ \frac{n_{2}-1}{2}\right] }(2i+2p+2)b_{2i+1}A_{2i+1,2p+2}\left( \theta \right) r^{2i+2p+1}\right) \times \end{equation*} \begin{equation*} \left( \sum_{s=0}^{\left[ \frac{n_{1}}{2}\right] }a_{2s}r^{2s+2p}\left( \tilde{\beta}_{s,p,0}+\sum_{l=1}^{s+p+1}\tilde{\beta}_{s,p,l}\cos \left( 2l-1\right) \theta \right) \right)d\theta = \end{equation*} \begin{equation*} \sum_{i=0}^{\left[ \frac{n_{2}-1}{2}\right] }\sum_{s=0}^{\left[ \frac{n_{1}}{2}\right] }(i+p+1)b_{2i+1}a_{2s}\sum_{l=1}^{s+p+1}\tilde{\beta}_{s,p,l}E_{i,p,l}r^{2i+2s+4p+1}, \end{equation*} \begin{equation*} \left( c_{3}\right)\qquad \frac{1}{2\pi }\int_{0}^{2\pi }\left( \sum_{i=0}^{\left[ \frac{n_{2}-1}{2}\right] }(2i+2p+2)b_{2i+1}A_{2i+1,2p+2}\left( \theta \right) r^{2i+2p+1}\right) \times \end{equation*} \begin{equation*} \left( \sum_{s=0}^{\left[ \frac{n_{2}-1}{2}\right] }b_{2s+1}r^{2s+2p+2}\sum_{l=0}^{s+p+1}\bar{\beta}_{s,p,l}\sin \left( 2l+1\right) \theta \right) d\theta =0, \end{equation*} \begin{equation*} \left( d_{3}\right)\qquad \frac{1}{2\pi }\int_{0}^{2\pi }\left( \sum_{i=0}^{\left[ \frac{n_{2}-1}{2}\right] }(2i+2p+2)b_{2i+1}A_{2i+1,2p+2}\left( \theta \right) r^{2i+2p+1}\right) \times \end{equation*} \begin{equation*} \left( \sum_{s=0}^{\left[ \frac{n_{3}-1}{2}\right] }c_{2s+1}r^{2s+2p+3}\left( \gamma _{s,p,0}+\sum_{l=1}^{s+p+2}\gamma _{s,p,l}\cos \left( 2l\right) \theta \right) \right) d\theta =0, \end{equation*} \begin{equation*} \left( e_{3}\right)\qquad \frac{1}{2\pi }\int_{0}^{2\pi }\left( \sum_{i=0}^{\left[ \frac{n_{2}-1}{2}\right] }(2i+2p+2)b_{2i+1}A_{2i+1,2p+2}\left( \theta \right) r^{2i+2p+1}\right) \times \end{equation*} \begin{equation*} \left( \sum_{s=0}^{\left[ \frac{n_{3}}{2}\right] }c_{2s}r^{2s+2p+2}\left( \tilde{\gamma}_{s,p,0}+\sum_{l=1}^{s+p+2}\tilde{\gamma}_{s,p,l}\cos \left( 2l-1\right) \theta \right) \right) d\theta = \end{equation*} \begin{equation*} \sum_{i=0}^{\left[ \frac{n_{2}-1}{2}\right] }\sum_{s=0}^{\left[ \frac{n_{3}}{2}\right] }(i+p+1)b_{2i+1}c_{2s}\sum_{l=1}^{s+p+2}\tilde{\gamma}_{s,p,l}E_{i,p,l}r^{2i+2s+4p+3}, \end{equation*} \begin{equation*} \left( f_{3}\right)\qquad \frac{1}{2\pi }\int_{0}^{2\pi }\left( \sum_{i=0}^{\left[ \frac{n_{2}-1}{2}\right] }(2i+2p+2)b_{2i+1}A_{2i+1,2p+2}\left( \theta \right) r^{2i+2p+1}\right) \times \end{equation*} \begin{equation*} \left( \sum_{s=0}^{\left[ \frac{n_{4}-1}{2}\right] }d_{2s+1}r^{2s+2p+4}\sum_{l=0}^{s+p+2}\bar{\gamma}_{s,p,l}\sin (2l+1)\theta \right) d\theta =0, \end{equation*} \begin{equation*} \left( g_{3}\right)\qquad \frac{1}{2\pi }\int_{0}^{2\pi }\left( \sum_{i=0}^{\left[ \frac{n_{2}-1}{2}\right] }(2i+2p+2)b_{2i+1}A_{2i+1,2p+2}\left( \theta \right) r^{2i+2p+1}\right) \times \end{equation*} \begin{equation*} \left( \sum_{s=1}^{\mu }d_{2s-2}r^{2s+2p+1}\sum_{l=1}^{s+p+1}\delta _{s,p,l}\sin (2l)\theta \right) d\theta =0. \end{equation*} We observe that the sum of the integrals \(\left( a_{3}\right)-\left( g_{3}\right)\) is the polynomial (17). This ends the proof of Lemma 4.

Lemma 5. The integral \(\Upsilon _{4}(r)\) is given by the following

\begin{eqnarray}\label{p4} \Upsilon _{4}(r) &=&\sum_{i=0}^{\left[ \frac{n_{3}-1}{2}\right] }\sum_{s=1}^{\mu }\frac{(2i+2p+3)}{2}c_{2i+1}d_{2s-2}\sum_{l=1}^{s+p+1}\delta _{s,p,l}\tilde{D}_{i,p,l}r^{2i+2s+4p+3}. \end{eqnarray}
(18)

Proof. By using the integrals in Appendix, we get \begin{equation*} \left( a_{4}\right)\qquad \frac{1}{2\pi }\int_{0}^{2\pi }\left( \sum_{i=0}^{\left[ \frac{n_{3}-1}{2}\right] }(2i+2p+3)c_{2i+1}A_{2i+1,2p+3}\left( \theta \right) r^{2i+2p+2}\right) \times \end{equation*} \begin{equation*} \left( \sum_{s=0}^{\left[ \frac{n_{1}-1}{2}\right] }a_{2s+1}r^{2s+2p+1}\left( \beta _{s,p,0}+\sum_{l=1}^{s+p+1}\beta _{s,p,l}\cos \left( 2l\right) \theta \right) \right)d\theta =0, \end{equation*} \begin{equation*} \left( b_{4}\right) \qquad\frac{1}{2\pi }\int_{0}^{2\pi }\left( \sum_{i=0}^{\left[ \frac{n_{3}-1}{2}\right] }(2i+2p+3)c_{2i+1}A_{2i+1,2p+3}\left( \theta \right) r^{2i+2p+2}\right) \times \end{equation*} \begin{equation*} \left( \sum_{s=0}^{\left[ \frac{n_{1}}{2}\right] }a_{2s}r^{2s+2p}% \left( \tilde{\beta}_{s,p,0}+\sum_{l=1}^{s+p+1}\tilde{\beta}_{s,p,l}\cos \left( 2l-1\right) \theta \right) \right)d\theta =0, \end{equation*} \begin{equation*} \left( c_{4}\right)\qquad \frac{1}{2\pi }\int_{0}^{2\pi }\left( \sum_{i=0}^{\left[ \frac{n_{3}-1}{2}\right] }(2i+2p+3)c_{2i+1}A_{2i+1,2p+3}\left( \theta \right) r^{2i+2p+2}\right) \times \end{equation*} \begin{equation*} \left( \sum_{s=0}^{\left[ \frac{n_{2}-1}{2}\right] }b_{2s+1}r^{2s+2p+2}% \sum_{l=0}^{s+p+1}\bar{\beta}_{s,p,l}\sin \left( 2l+1\right) \theta \right) d\theta =0, \end{equation*} \begin{equation*} \left( d_{4}\right) \qquad\frac{1}{2\pi }\int_{0}^{2\pi }\left( \sum_{i=0}^{\left[ \frac{n_{3}-1}{2}\right] }(2i+2p+3)c_{2i+1}A_{2i+1,2p+3}\left( \theta \right) r^{2i+2p+2}\right) \times \end{equation*} \begin{equation*} \left( \sum_{s=0}^{\left[ \frac{n_{3}-1}{2}\right] }c_{2s+1}r^{2s+2p+3}\left( \gamma _{s,p,0}+\sum_{l=1}^{s+p+2}\gamma _{s,p,l}\cos \left( 2l\right) \theta \right) \right) d\theta =0, \end{equation*} \begin{equation*} \left( e_{4}\right)\qquad \frac{1}{2\pi }\int_{0}^{2\pi }\left( \sum_{i=0}^{\left[ \frac{n_{3}-1}{2}\right] }(2i+2p+3)c_{2i+1}A_{2i+1,2p+3}\left( \theta \right) r^{2i+2p+2}\right) \times 
\end{equation*} \begin{equation*} \left( \sum_{s=0}^{\left[ \frac{n_{3}}{2}\right] }c_{2s}r^{2s+2p+2}% \left( \tilde{\gamma}_{s,p,0}+\sum_{l=1}^{s+p+2}\tilde{\gamma}_{s,p,l}\cos \left( 2l-1\right) \theta \right) \right) d\theta =0, \end{equation*} \begin{equation*} \left( f_{4}\right) \qquad\frac{1}{2\pi }\int_{0}^{2\pi }\left( \sum_{i=0}^{\left[ \frac{n_{3}-1}{2}\right] }(2i+2p+3)c_{2i+1}A_{2i+1,2p+3}\left( \theta \right) r^{2i+2p+2}\right) \times \end{equation*} \begin{equation*} \left( \sum_{s=0}^{\left[ \frac{n_{4}-1}{2}\right] }d_{2s+1}r^{2s+2p+4}\sum_{l=0}^{s+p+2}\bar{\gamma}_{s,p,l}\sin (2l+1)\theta \right) d\theta =0, \end{equation*} \begin{equation*} \left( g_{4}\right) \qquad\frac{1}{2\pi }\int_{0}^{2\pi }\left( \sum_{i=0}^{\left[ \frac{n_{3}-1}{2}\right] }(2i+2p+3)c_{2i+1}A_{2i+1,2p+3}\left( \theta \right) r^{2i+2p+2}\right) \times \end{equation*} \begin{equation*} \left( \sum_{s=1}^{\mu }d_{2s-2}r^{2s+2p+1}\sum_{l=1}^{s+p+1}\delta _{s,p,l}\sin (2l)\theta \right) d\theta = \end{equation*} \begin{equation*} \sum_{i=0}^{\left[ \frac{n_{3}-1}{2}\right] }\sum_{s=1}^{\mu }\frac{(2i+2p+3) }{2}c_{2i+1}d_{2s-2}\sum_{l=1}^{s+p+1}\delta _{s,p,l}\tilde{D}_{i,p,l}r^{2i+2s+4p+3}. \end{equation*} We observe that the sum of the integrals \(\left( a_{4}\right)-\left( g_{4}\right)\) is the polynomial (18). This ends the proof of Lemma 5.

Lemma 6. The integral \(\Upsilon _{5}(r)\) is given by the following

\begin{eqnarray}\label{p5} \Upsilon _{5}(r) &=&\sum_{i=0}^{\left[ \frac{n_{3}}{2}\right] }\sum_{s=0}^{\left[ \frac{n_{2}-1}{% 2}\right] }(i+p+1)c_{2i}b_{2s+1}\sum_{l=0}^{s+p+1}\bar{\beta}_{s,p,l}\tilde{C% }_{i,p,l}r^{2i+2s+4p+3}\nonumber\\ &&+\sum_{i=0}^{\left[ \frac{n_{3}}{2}\right] }\sum_{s=0}^{\left[ \frac{n_{4}-1}{% 2}\right] }(i+p+1)c_{2i}d_{2s+1}\sum_{l=0}^{s+p+2}\bar{\gamma}_{s,p,l}% \tilde{C}_{i,p,l}r^{2i+2s+4p+5}. \end{eqnarray}
(19)

Proof. By using the integrals in Appendix, we get \begin{equation*} \left( a_{5}\right)\qquad \frac{1}{2\pi }\int_{0}^{2\pi }\left( \sum_{i=0}^{\left[ \frac{n_{3}}{2}\right] }(2i+2p+2)c_{2i}A_{2i,2p+3}\left( \theta \right) r^{2i+2p+1}\right) \times \end{equation*} \begin{equation*} \left( \sum_{s=0}^{\left[ \frac{n_{1}-1}{2}\right] }a_{2s+1}r^{2s+2p+1}\left( \beta _{s,p,0}+\sum_{l=1}^{s+p+1}\beta _{s,p,l}\cos \left( 2l\right) \theta \right) \right)d\theta =0, \end{equation*} \begin{equation*} \left( b_{5}\right)\qquad \frac{1}{2\pi }\int_{0}^{2\pi }\left( \sum_{i=0}^{\left[ \frac{n_{3}}{2}\right] }(2i+2p+2)c_{2i}A_{2i,2p+3}\left( \theta \right) r^{2i+2p+1}\right) \times \end{equation*} \begin{equation*} \left( \sum_{s=0}^{\left[ \frac{n_{1}}{2}\right] }a_{2s}r^{2s+2p}% \left( \tilde{\beta}_{s,p,0}+\sum_{l=1}^{s+p+1}\tilde{\beta}_{s,p,l}\cos \left( 2l-1\right) \theta \right) \right)d\theta =0, \end{equation*} \begin{equation*} \left( c_{5}\right)\qquad \frac{1}{2\pi }\int_{0}^{2\pi }\left( \sum_{i=0}^{\left[ \frac{n_{3}}{2}\right] }(2i+2p+2)c_{2i}A_{2i,2p+3}\left( \theta \right) r^{2i+2p+1}\right) \times \end{equation*} \begin{equation*} \left( \sum_{s=0}^{\left[ \frac{n_{2}-1}{2}\right] }b_{2s+1}r^{2s+2p+2}% \sum_{l=0}^{s+p+1}\bar{\beta}_{s,p,l}\sin \left( 2l+1\right) \theta \right) d\theta = \end{equation*} \begin{equation*} \sum_{i=0}^{\left[ \frac{n_{3}}{2}\right] }\sum_{s=0}^{\left[ \frac{n_{2}-1}{% 2}\right] }(i+p+1)c_{2i}b_{2s+1}\sum_{l=0}^{s+p+1}\bar{\beta}_{s,p,l}\tilde{C% }_{i,p,l}r^{2i+2s+4p+3}, \end{equation*} \begin{equation*} \left( d_{5}\right)\qquad \frac{1}{2\pi }\int_{0}^{2\pi }\left( \sum_{i=0}^{\left[ \frac{n_{3}}{2}\right] }(2i+2p+2)c_{2i}A_{2i,2p+3}\left( \theta \right) r^{2i+2p+1}\right) \times \end{equation*} \begin{equation*} \left( \sum_{s=0}^{\left[ \frac{n_{3}-1}{2}\right] }c_{2s+1}r^{2s+2p+3}\left( \gamma _{s,p,0}+\sum_{l=1}^{s+p+2}\gamma _{s,p,l}\cos \left( 2l\right) \theta \right) \right) d\theta =0, \end{equation*} \begin{equation*} 
\left( e_{5}\right)\qquad \frac{1}{2\pi }\int_{0}^{2\pi }\left( \sum_{i=0}^{\left[ \frac{n_{3}}{2}\right] }(2i+2p+2)c_{2i}A_{2i,2p+3}\left( \theta \right) r^{2i+2p+1}\right) \times \end{equation*} \begin{equation*} \left( \sum_{s=0}^{\left[ \frac{n_{3}}{2}\right] }c_{2s}r^{2s+2p+2}\left( \tilde{\gamma}_{s,p,0}+\sum_{l=1}^{s+p+2}\tilde{\gamma}_{s,p,l}\cos \left( 2l-1\right) \theta \right) \right) d\theta =0, \end{equation*} \begin{equation*} \end{equation*} \begin{eqnarray*} &&\left( f_{5}\right)\qquad \frac{1}{2\pi }\int_{0}^{2\pi }\left( \sum_{i=0}^{% \left[ \frac{n_{3}}{2}\right] }(2i+2p+2)c_{2i}A_{2i,2p+3}\left( \theta \right) r^{2i+2p+1}\right) \times \\ && \end{eqnarray*} \begin{equation*} \left( \sum_{s=0}^{\left[ \frac{n_{4}-1}{2}\right] }d_{2s+1}r^{2s+2p+4}\sum_{l=0}^{s+p+2}\bar{\gamma}_{s,p,l}\sin (2l+1)\theta \right) d\theta = \end{equation*} \begin{equation*} \sum_{i=0}^{\left[ \frac{n_{3}}{2}\right] }\sum_{s=0}^{\left[ \frac{n_{4}-1}{% 2}\right] }(i+p+1)c_{2i}d_{2s+1}\sum_{l=0}^{s+p+2}\bar{\gamma}_{s,p,l}% \tilde{C}_{i,p,l}r^{2i+2s+4p+5}, \end{equation*} \begin{equation*} \left( g_{5}\right)\qquad \frac{1}{2\pi }\int_{0}^{2\pi }\left( \sum_{i=0}^{\left[ \frac{n_{3}}{2}\right] }(2i+2p+2)c_{2i}A_{2i,2p+3}\left( \theta \right) r^{2i+2p+1}\right) \times \end{equation*} \begin{equation*} \left( \sum_{s=1}^{\mu }d_{2s-2}r^{2s+2p+1}\sum_{l=1}^{s+p+1}\delta _{s,p,l}\sin (2l)\theta \right) d\theta =0. \end{equation*} We observe that the sum of the integrals \(\left( a_{5}\right)-\left( g_{5}\right)\) is the polynomial (19). This ends the proof of Lemma 6.

Lemma 7. The integral \(\Upsilon _{6}(r)\) is given by the following,

\begin{eqnarray}\label{p6} \Upsilon _{6}(r) &=&\sum_{i=0}^{\left[ \frac{n_{4}-1}{2}\right] }\sum_{s=0}^{\left[ \frac{n_{1}}{% 2}\right] }(i+p+2)d_{2i+1}a_{2s}\sum_{l=1}^{s+p+1}\tilde{\beta}_{s,p,l}% \tilde{E}_{i,p,l}r^{2i+2s+4p+3} \nonumber\\ &&+\sum_{i=0}^{\left[ \frac{n_{4}-1}{2}\right] }\sum_{s=0}^{\left[ \frac{n_{3}}{% 2}\right] }(i+p+2)d_{2i+1}c_{2s}\sum_{l=1}^{s+p+2}\tilde{\gamma}_{s,p,l}% \tilde{E}_{i,p,l}r^{2i+2s+4p+5}. \end{eqnarray}
(20)

Proof. By using the integrals in Appendix, we get \begin{equation*} \left( a_{6}\right)\qquad \frac{1}{2\pi }\int_{0}^{2\pi }\left( \sum_{i=0}^{\left[ \frac{n_{4}-1}{2}\right] }(2i+2p+4)d_{2i+1}A_{2i+1,2p+4}\left( \theta \right) r^{2i+2p+3}\right) \times \end{equation*} \begin{equation*} \left( \sum_{s=0}^{\left[ \frac{n_{1}-1}{2}\right] }a_{2s+1}r^{2s+2p+1}\left( \beta _{s,p,0}+\sum_{l=1}^{s+p+1}\beta _{s,p,l}\cos \left( 2l\right) \theta \right)\right)d\theta =0, \end{equation*} \begin{equation*} \left( b_{6}\right)\qquad \frac{1}{2\pi }\int_{0}^{2\pi }\left( \sum_{i=0}^{\left[ \frac{n_{4}-1}{2}\right] }(2i+2p+4)d_{2i+1}A_{2i+1,2p+4}\left( \theta \right) r^{2i+2p+3}\right) \times \end{equation*} \begin{equation*} \left( \sum_{s=0}^{\left[ \frac{n_{1}}{2}\right] }a_{2s}r^{2s+2p}% \left( \tilde{\beta}_{s,p,0}+\sum_{l=1}^{s+p+1}\tilde{\beta}_{s,p,l}\cos \left( 2l-1\right) \theta \right) \right)d\theta = \end{equation*} \begin{equation*} \sum_{i=0}^{\left[ \frac{n_{4}-1}{2}\right] }\sum_{s=0}^{\left[ \frac{n_{1}}{% 2}\right] }(i+p+2)d_{2i+1}a_{2s}\sum_{l=1}^{s+p+1}\tilde{\beta}_{s,p,l}% \tilde{E}_{i,p,l}r^{2i+2s+4p+3}, \end{equation*} \begin{equation*} \left( c_{6}\right)\qquad \frac{1}{2\pi }\int_{0}^{2\pi }\left( \sum_{i=0}^{\left[ \frac{n_{4}-1}{2}\right] }(2i+2p+4)d_{2i+1}A_{2i+1,2p+4}\left( \theta \right) r^{2i+2p+3}\right) \times \end{equation*} \begin{equation*} \left( \sum_{s=0}^{\left[ \frac{n_{2}-1}{2}\right] }b_{2s+1}r^{2s+2p+2}% \sum_{l=0}^{s+p+1}\bar{\beta}_{s,p,l}\sin \left( 2l+1\right) \theta \right) d\theta =0, \end{equation*} \begin{equation*} \end{equation*} \begin{equation*} \left( d_{6}\right)\qquad \frac{1}{2\pi }\int_{0}^{2\pi }\left( \sum_{i=0}^{\left[ \frac{n_{4}-1}{2}\right] }(2i+2p+4)d_{2i+1}A_{2i+1,2p+4}\left( \theta \right) r^{2i+2p+3}\right) \times \end{equation*} \begin{equation*} \left( \sum_{s=0}^{\left[ \frac{n_{3}-1}{2}\right] }c_{2s+1}r^{2s+2p+3}\left( \gamma _{s,p,0}+\sum_{l=1}^{s+p+2}\gamma _{s,p,l}\cos \left( 2l\right) \theta 
\right) \right) d\theta =0, \end{equation*} \begin{equation*} \left( e_{6}\right)\qquad \frac{1}{2\pi }\int_{0}^{2\pi }\left( \sum_{i=0}^{\left[ \frac{n_{4}-1}{2}\right] }(2i+2p+4)d_{2i+1}A_{2i+1,2p+4}\left( \theta \right) r^{2i+2p+3}\right) \times \end{equation*} \begin{equation*} \left( \sum_{s=0}^{\left[ \frac{n_{3}}{2}\right] }c_{2s}r^{2s+2p+2}\left( \tilde{\gamma}_{s,p,0}+\sum_{l=1}^{s+p+2}\tilde{\gamma}_{s,p,l}\cos \left( 2l-1\right) \theta \right) \right) d\theta = \end{equation*} \begin{equation*} \sum_{i=0}^{\left[ \frac{n_{4}-1}{2}\right] }\sum_{s=0}^{\left[ \frac{n_{3}}{% 2}\right] }(i+p+2)d_{2i+1}c_{2s}\sum_{l=1}^{s+p+2}\tilde{\gamma}_{s,p,l}% \tilde{E}_{i,p,l}r^{2i+2s+4p+5}, \end{equation*} \begin{equation*} \left( f_{6}\right)\qquad \frac{1}{2\pi }\int_{0}^{2\pi }\left( \sum_{i=0}^{\left[ \frac{n_{4}-1}{2}\right] }(2i+2p+4)d_{2i+1}A_{2i+1,2p+4}\left( \theta \right) r^{2i+2p+3}\right) \times \end{equation*} \begin{equation*} \left( \sum_{s=0}^{\left[ \frac{n_{4}-1}{2}\right] }d_{2s+1}r^{2s+2p+4}% \sum_{l=0}^{s+p+2}\bar{\gamma}_{s,p,l}\sin (2l+1)\theta \right) d\theta =0, \end{equation*} \begin{equation*} \left( g_{6}\right)\qquad \frac{1}{2\pi }\int_{0}^{2\pi }\left( \sum_{i=0}^{\left[ \frac{n_{4}-1}{2}\right] }(2i+2p+4)d_{2i+1}A_{2i+1,2p+4}\left( \theta \right) r^{2i+2p+3}\right) \times \end{equation*} \begin{equation*} \left( \sum_{s=1}^{\mu }d_{2s-2}r^{2s+2p+1}\sum_{l=1}^{s+p+1}\delta _{s,p,l}\sin (2l)\theta \right) d\theta =0. \end{equation*} We observe that the sum of the integrals \(\left( a_{6}\right)-\left( g_{6}\right)\) is the polynomial (20). This ends the proof of Lemma 7.

Lemma 8. The integral \(\Upsilon _{7}(r)\) is given by the following,

\begin{eqnarray}\label{p7} \Upsilon _{7}(r) &=&\sum_{i=1}^{\mu }\sum_{s=0}^{\left[ \frac{n_{1}-1}{2}\right] }\frac{(2i+2p+1)% }{2}d_{2i-2}a_{2s+1}\sum_{l=0}^{s+p+1}\beta _{s,p,l}\left( \tilde{F}_{i,p,l}-% \frac{2p+3}{2i-1}F_{i,p,l}\right) r^{2i+2s+4p+1} \nonumber\\ &&+\sum_{i=1}^{\mu }\sum_{s=0}^{\left[ \frac{n_{3}-1}{2}\right] }\frac{(2i+2p+1)% }{2}d_{2i-2}c_{2s+1}\sum_{l=0}^{s+p+2}\gamma _{s,p,l}\left( \tilde{F}% _{i,p,l}-\frac{2p+3}{2i-1}F_{i,p,l}\right) r^{2i+2s+4p+3}. \end{eqnarray}
(21)

Proof. By using the integrals in Appendix, we get \begin{equation*} \left( a_{7}\right)\qquad \frac{1}{2\pi }\int_{0}^{2\pi }\left( \sum_{i=1}^{\mu }(2i+2p+1)d_{2i-2}\left( A_{2i-2,2p+4}\left( \theta \right) -\frac{2p+3}{2i-1% }A_{2i,2p+2}\left( \theta \right) \right) r^{2i+2p}\right) \times \end{equation*} \begin{equation*} \left( \sum_{s=0}^{\left[ \frac{n_{1}-1}{2}\right] }a_{2s+1}r^{2s+2p+1}% \left( \beta _{s,p,0}+\sum_{l=1}^{s+p+1}\beta _{s,p,l}\cos \left( 2l\right) \theta \right)\right)d\theta = \end{equation*} \begin{equation*} \sum_{i=1}^{\mu }\sum_{s=0}^{\left[ \frac{n_{1}-1}{2}\right] }\frac{(2i+2p+1)% }{2}d_{2i-2}a_{2s+1}\sum_{l=0}^{s+p+1}\beta _{s,p,l}\left( \tilde{F}_{i,p,l}-% \frac{2p+3}{2i-1}F_{i,p,l}\right) r^{2i+2s+4p+1}, \end{equation*} \begin{equation*} \left( b_{7}\right)\qquad \frac{1}{2\pi }\int_{0}^{2\pi }\left( \sum_{i=1}^{\mu }(2i+2p+1)d_{2i-2}\left( A_{2i-2,2p+4}\left( \theta \right) -\frac{2p+3}{2i-1% }A_{2i,2p+2}\left( \theta \right) \right) r^{2i+2p}\right) \times \end{equation*} \begin{equation*} \left( \sum_{s=0}^{\left[ \frac{n_{1}}{2}\right] }a_{2s}r^{2s+2p}% \left( \tilde{\beta}_{s,p,0}+\sum_{l=1}^{s+p+1}\tilde{\beta}_{s,p,l}\cos \left( 2l-1\right) \theta \right) \right)d\theta =0, \end{equation*} \begin{equation*} \left( c_{7}\right)\qquad \frac{1}{2\pi }\int_{0}^{2\pi }\left( \sum_{i=1}^{\mu }(2i+2p+1)d_{2i-2}\left( A_{2i-2,2p+4}\left( \theta \right) -\frac{2p+3}{2i-1% }A_{2i,2p+2}\left( \theta \right) \right) r^{2i+2p}\right) \times \end{equation*} \begin{equation*} \left( \sum_{s=0}^{\left[ \frac{n_{2}-1}{2}\right] }b_{2s+1}r^{2s+2p+2}% \sum_{l=0}^{s+p+1}\bar{\beta}_{s,p,l}\sin \left( 2l+1\right) \theta \right) d\theta =0, \end{equation*} \begin{equation*} \left( d_{7}\right)\qquad \frac{1}{2\pi }\int_{0}^{2\pi }\left( \sum_{i=1}^{\mu }(2i+2p+1)d_{2i-2}\left( A_{2i-2,2p+4}\left( \theta \right) -\frac{2p+3}{2i-1% }A_{2i,2p+2}\left( \theta \right) \right) r^{2i+2p}\right) \times \end{equation*} \begin{equation*} \left( 
\sum_{s=0}^{\left[ \frac{n_{3}-1}{2}\right] }c_{2s+1}r^{2s+2p+3}% \left( \gamma _{s,p,0}+\sum_{l=1}^{s+p+2}\gamma _{s,p,l}\cos \left( 2l\right) \theta \right) \right) d\theta = \end{equation*} \begin{equation*} \sum_{i=1}^{\mu }\sum_{s=0}^{\left[ \frac{n_{3}-1}{2}\right] }\frac{(2i+2p+1)% }{2}d_{2i-2}c_{2s+1}\sum_{l=0}^{s+p+2}\gamma _{s,p,l}\left( \tilde{F}% _{i,p,l}-\frac{2p+3}{2i-1}F_{i,p,l}\right) r^{2i+2s+4p+3}, \end{equation*} \begin{equation*} \left( e_{7}\right)\qquad \frac{1}{2\pi }\int_{0}^{2\pi }\left( \sum_{i=1}^{\mu }(2i+2p+1)d_{2i-2}\left( A_{2i-2,2p+4}\left( \theta \right) -\frac{2p+3}{2i-1% }A_{2i,2p+2}\left( \theta \right) \right) r^{2i+2p}\right) \times \end{equation*} \begin{equation*} \left( \sum_{s=0}^{\left[ \frac{n_{3}}{2}\right] }c_{2s}r^{2s+2p+2}\left( \tilde{\gamma}_{s,p,0}+\sum_{l=1}^{s+p+2}\tilde{\gamma}_{s,p,l}\cos \left( 2l-1\right) \theta \right) \right) d\theta =0, \end{equation*} \begin{equation*} \left( f_{7}\right)\qquad \frac{1}{2\pi }\int_{0}^{2\pi }\left( \sum_{i=1}^{\mu }(2i+2p+1)d_{2i-2}\left( A_{2i-2,2p+4}\left( \theta \right) -\frac{2p+3}{2i-1% }A_{2i,2p+2}\left( \theta \right) \right) r^{2i+2p}\right) \times \end{equation*} \begin{equation*} \left( \sum_{s=0}^{\left[ \frac{n_{4}-1}{2}\right] }d_{2s+1}r^{2s+2p+4}% \sum_{l=0}^{s+p+2}\bar{\gamma}_{s,p,l}\sin (2l+1)\theta \right) d\theta =0, \end{equation*} \begin{equation*} \left( g_{7}\right)\qquad \frac{1}{2\pi }\int_{0}^{2\pi }\left( \sum_{i=1}^{\mu }(2i+2p+1)d_{2i-2}\left( A_{2i-2,2p+4}\left( \theta \right) -\frac{2p+3}{2i-1% }A_{2i,2p+2}\left( \theta \right) \right) r^{2i+2p}\right) \times \end{equation*} \begin{equation*} \left( \sum_{s=1}^{\mu }d_{2s-2}r^{2s+2p+1}\sum_{l=1}^{s+p+1}\delta _{s,p,l}\sin (2l)\theta \right) d\theta =0. \end{equation*} We observe that the sum of the integrals \(\left( a_{7}\right)-\left( g_{7}\right)\) is the polynomial (21). This ends the proof of Lemma 8.

By Lemmas 2-8, we obtain \(F_{20}^{1}\left( r\right) =r^{1+4p}P_{1}\left( r^{2}\right) \), where \(% P_{1}\left( r^{2}\right) \) is a polynomial of degree \[ \max \left\{ \left[ \frac{n_{1}}{2}\right] \right. +\left[ \frac{n_{2}-1}{2}\right], \left[ \frac{n_{1}}{2}\right] +\left[ \frac{n_{4}-1}{2}\right]+1,\left[ \frac{n_{1}-1}{2}\right] +\mu, \] \[ \left. \left[ \frac{n_{2}-1}{2}\right] + \left[ \frac{n_{3}}{2}\right]+1,\left[ \frac{n_{3}}{2}\right] +\left[ \frac{n_{4}-1}{2}\right]+2,\left[ \frac{n_{3}-1}{2}\right] +\mu+1\right\} . \] Again by substituting (14) in (13) and (12), we obtain
\(F_{2}(r,\theta ) =\sum\limits_{i=0}^{n_{1}}\bar{a}_{i}r^{i+2p}A_{i,2p+1}\left( \theta \right) +\sum\limits_{i=0}^{n_{2}}\bar{b}_{i}r^{i+2p+1}A_{i,2p+2}\left( \theta \right)+ \sum\limits_{i=0}^{n_{3}}\bar{c}_{i}r^{i+2p+2}A_{i,2p+3}\left( \theta \right) +\sum\limits_{i=0}^{n_{4}}\bar{d}_{i}r^{i+2p+3}A_{i,2p+4}\left( \theta \right) - \dfrac{1}{r}\left( \sum\limits_{i=0}^{\left[ \frac{n_{1}-1}{2}\right] }a_{2i+1}A_{2i+1,2p+1}% \left( \theta \right) r^{2i+2p+1}+\sum\limits_{i=0}^{\left[ \frac{n_{1}}{2}\right] }a_{2i}A_{2i,2p+1}\left( \theta \right) r^{2i+2p}+\right. \sum\limits_{i=0}^{\left[ \frac{n_{2}-1}{2}\right] }b_{2i+1}A_{2i+1,2p+2}\left( \theta \right) r^{2i+2p+2}+\sum\limits_{i=0}^{\left[ \frac{n_{3}-1}{2}\right] }c_{2i+1}A_{2i+1,2p+3}\left( \theta \right) r^{2i+2p+3}+ \sum\limits_{i=0}^{\left[ \frac{n_{3}}{2}\right] }c_{2i}A_{2i,2p+3}% \left( \theta \right) r^{2i+2p+2}+\sum\limits_{i=0}^{\left[ \frac{n_{4}-1}{2}\right] }d_{2i+1}A_{2i+1,2p+4}\left( \theta \right) r^{2i+2p+4}+ \left. \sum\limits_{i=1}^{\mu }\left( A_{2i-2,2p+4}\left( \theta \right) -\frac{2p+3}{2i-1}A_{2i,2p+2}\left( \theta \right) \right) d_{2i-2}r^{2i+2p+1}\right) \times \left( \sum\limits_{k=0}^{\left[ \frac{n_{1}-1}{2}% \right] }a_{2k+1}A_{2k+2,2p}\left( \theta \right) r^{2k+2p+1}+\sum\limits_{k=0}^{% \left[ \frac{n_{1}}{2}\right] }a_{2k}A_{2k+1,2p}\left( \theta \right) r^{2k+2p}+\right. \sum\limits_{k=0}^{\left[ \frac{n_{2}-1}{2% }\right] }b_{2k+1}A_{2k+2,2p+1}\left( \theta \right) r^{2k+2p+2}+\sum\limits_{k=0}^{% \left[ \frac{n_{3}-1}{2}\right] }c_{2k+1}A_{2k+2,2p+2}\left( \theta \right) r^{2k+2p+3}+ \sum\limits_{k=0}^{\left[ \frac{n_{3}}{2}\right] }c_{2k}A_{2k+1,2p+2}\left( \theta \right) r^{2k+2p+2}+\sum\limits_{k=0}^{\left[ \frac{n_{4}-1}{2}\right] }d_{2k+1}A_{2k+2,2p+3}\left( \theta \right) r^{2k+2p+4}+ \left. \sum\limits_{k=1}^{\mu }\left( A_{2k-1,2p+3}\left( \theta \right) -\frac{2p+3}{2k-1}A_{2k+1,2p+1}\left( \theta \right) \right) d_{2k-2}r^{2k+2p+1}\right). \)
To find the explicit expression of \(F_{20}^{2}(r)=\dfrac{1}{2\pi}\int_{0}^{2\pi}F_{2}(r,\theta)d\theta\), we use Lemma 1 and get
\(F_{20}^{2}(r)=\left( \sum\limits_{i=0}^{\left[ \frac{n_{2}}{2}\right] }\bar b_{2i}r^{2i}\xi _{2i,2p+2}\left( 2\pi \right) +\sum\limits_{i=0}^{\left[ \frac{% n_{4}}{2}\right] }\bar d_{2i}r^{2i+2}\xi _{2i,2p+4}\left( 2\pi \right) \right)r^{2p+1}- \sum\limits_{i=0}^{\left[ \frac{n_{1}-1}{2}\right] }\sum\limits_{k=1}^{\mu }a_{2i+1}d_{2k-2}\left( \xi _{2i+2k,4p+4}(2\pi )-\frac{2p+3}{2k-1}\xi _{2i+2k+2,4p+2}(2\pi )\right) r^{2i+2k+4p+1}- \sum\limits_{i=0}^{\left[ \frac{n_{1}}{2}\right] }\sum\limits_{k=0}^{\left[ \frac{n_{2}-1% }{2}\right] }a_{2i}b_{2k+1}\xi _{2i+2k+2,4p+2}(2\pi )r^{2i+2k+4p+1}- \sum\limits_{i=0}^{\left[ \frac{n_{1}}{2}\right] }\sum\limits_{k=0}^{\left[ \frac{n_{4}-1% }{2}\right] }a_{2i}d_{2k+1}\xi _{2i+2k+2,4p+4}(2\pi )r^{2i+2k+4p+3}- \sum\limits_{i=0}^{\left[ \frac{n_{2}-1}{2}\right] }\sum\limits_{k=0}^{\left[ \frac{n_{1}% }{2}\right] }b_{2i+1}a_{2k}\xi _{2i+2k+2,4p+4}\left( 2\pi \right) r^{2i+2k+4p+1}- \sum\limits_{i=0}^{\left[ \frac{n_{2}-1}{2}\right] }\sum\limits_{k=0}^{\left[ \frac{n_{3}% }{2}\right] }b_{2i+1}c_{2k}\xi _{2i+2k+2,4p+4}\left( 2\pi \right) r^{2i+2k+4p+3}- \sum\limits_{i=0}^{\left[ \frac{n_{3}-1}{2}\right] }\sum\limits_{k=1}^{\mu }c_{2i+1}d_{2k-2}\left( \xi _{2i+2k,4p+6}\left( 2\pi \right) -\frac{2p+3}{% 2k-1}\xi _{2i+2k+2,4p+4}\left( 2\pi \right) \right) r^{2i+2k+4p+3}- \sum\limits_{i=0}^{\left[ \frac{n_{3}}{2}\right] }\sum\limits_{k=0}^{\left[ \frac{n_{2}-1% }{2}\right] }c_{2i}b_{2k+1}\xi _{2i+2k+2,4p+4}\left( 2\pi \right) r^{2i+2k+4p+3}- \sum\limits_{i=0}^{\left[ \frac{n_{3}}{2}\right] }\sum\limits_{k=0}^{\left[ \frac{n_{4}-1% }{2}\right] }c_{2i}d_{2k+1}\xi _{2i+2k+2,4p+6}\left( 2\pi \right) r^{2i+2k+4p+5}- \sum\limits_{i=0}^{\left[ \frac{n_{4}-1}{2}\right] }\sum\limits_{k=0}^{\left[ \frac{n_{1}% }{2}\right] }d_{2i+1}a_{2k}\xi _{2i+2k+2,4p+4}\left( 2\pi \right) r^{2i+2k+4p+3}- \sum\limits_{i=0}^{\left[ \frac{n_{4}-1}{2}\right] }\sum\limits_{k=0}^{\left[ \frac{n_{3}% }{2}\right] }d_{2i+1}c_{2k}\xi _{2i+2k+2,4p+6}\left( 2\pi \right) 
r^{2i+2k+4p+5}- \sum\limits_{i=1}^{\mu }\sum\limits_{k=0}^{\left[ \frac{n_{1}-1}{2}\right] }d_{2i-2}a_{2k+1}\left( \xi _{2i+2k,4p+4}\left( 2\pi \right) -\frac{2p+3}{% 2i-1}\xi _{2i+2k+2,4p+2}\left( 2\pi \right) \right) r^{2i+2k+4p+1}- \sum\limits_{i=1}^{\mu }\sum\limits_{k=0}^{\left[ \frac{n_{3}-1}{2}\right] }d_{2i-2}c_{2k+1}\left( \xi _{\substack{ 2i+2k,4p+6 \\ }}\left( 2\pi \right) -\frac{2p+3}{2i-1}\xi _{2i+2k+2,4p+4}\left( 2\pi \right) \right) r^{2i+2k+4p+3} =r^{1+2p}\left(P_{2}(r^{2})+r^{2p}P_{3}(r^{2})\right).\)
Where \(P_{2}(r^{2})\) is a polynomial of degree \[ \max \left\{ \left[ \frac{n_{2}}{2}\right],\left[\frac{n_{4}}{2}\right]+1\right\}, \] and \(P_{3}(r^{2})\) is a polynomial of degree \[ \max \left\{ \left[ \frac{n_{1}}{2}\right] \right. +\left[ \frac{n_{2}-1}{2}\right],% \left[ \frac{n_{1}}{2}\right] +\left[ \frac{n_{4}-1}{2}\right]+1,\left[ \frac{n_{1}-1}{2}\right] +\mu, \] \[ \left. \left[ \frac{n_{2}-1}{2}\right] +% \left[ \frac{n_{3}}{2}\right]+1,\left[ \frac{n_{3}}{2}\right] +\left[ \frac{n_{4}-1}{2}\right]+2,\left[ \frac{n_{3}-1}{2}\right] +\mu+1\right\} . \] Therefore \(F_{20}(r)\) is a polynomial in the variable \(r^{2}\) of the form \[ F_{20}\left( r\right) =F_{20}^{1}\left( r\right) +F_{20}^{2}\left( r\right) =r^{1+2p}\left( r^{2p}P_{1}\left( r^{2}\right) +P_{2}\left( r^{2}\right) +r^{2p}P_{3}\left( r^{2}\right) \right). \] Thus, \(F_{20}(r)\) has at most \[ \max \left\{ \left[ \frac{n_{2}}{2}\right] ,\left[ \frac{n_{4}}{2}\right]+1 ,% \left[ \frac{n_{1}}{2}\right] \right. +\left[ \frac{n_{2}-1}{2}\right] +p,% \left[ \frac{n_{1}}{2}\right] +\left[ \frac{n_{4}-1}{2}\right] +p+1, \left[ \frac{n_{1}-1}{2}\right] +\] \[ \mu +p,\left[ \frac{n_{2}-1}{2}\right] +% \left[ \frac{n_{3}}{2}\right] +p+1,\left[ \frac{n_{3}}{2}\right] +\left[ \frac{n_{4}-1}{2}\right] +p+2, \left. \left[ \frac{n_{3}-1}{2}\right] +\mu +p+1\right\} , \] positive roots. Hence statement (b) of Theorem 1 is proved.

4. Applications

In this section, we give examples illustrating statements (a) and (b) of Theorem 1. The first example corresponds to statement (a).

Example 1.

\begin{equation}\label{E1} \left\{ \begin{array}{l} \dot{x}=y \\ \dot{y}=-x-\varepsilon (2xy^{10}+(\frac{256}{231}-\frac{2560}{33}x^{2})y^{11}+x^{3}y^{12}+(\frac{32768}{429}x^{2})y^{13}) ,% \end{array}% \right. \end{equation}
(22)
where \(f^{1}(x), g^{1}(x), h^{1}(x), l^{1}(x)\) have degrees \(n_{1}=1, n_{2}=2, n_{3}=3\) and \(n_{4}=2\), respectively. The function of the averaging theory of first order is \[ F_{10}(r)=r^{15}-\frac{5}{4}r^{13}+\frac{1}{4}r^{11}, \] which has exactly \(\left[\frac{n_{4}}{2}\right]+1=\) two positive zeros \(r_{1}=\frac{1}{2}\) and \(r_{2}=1\), which satisfy \[ \frac{dF_{10}\left( r\right) }{dr}\left\vert _{r=r_{1}}\right. =-\frac{3}{8192}\neq 0,\qquad \frac{dF_{10}\left( r\right) }{dr}\left\vert _{r=r_{2}}\right. =\frac{3}{2}\neq 0, \] so we conclude that system (22) has an unstable limit cycle for \(r_{2}=1\) and a stable limit cycle for \(r_{1}=\frac{1}{2}\).
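The zeros and derivative values above can be checked with exact rational arithmetic. The following sketch (using Python's `fractions` module; the function names `F10` and `dF10` are ours, not from the paper) evaluates \(F_{10}\) and its derivative at \(r_{1}=\frac{1}{2}\) and \(r_{2}=1\):

```python
from fractions import Fraction as F

def F10(r):
    # F10(r) = r^15 - (5/4) r^13 + (1/4) r^11
    return r**15 - F(5, 4) * r**13 + F(1, 4) * r**11

def dF10(r):
    # term-by-term derivative of F10
    return 15 * r**14 - F(65, 4) * r**12 + F(11, 4) * r**10

for r in (F(1, 2), F(1, 1)):
    assert F10(r) == 0          # r = 1/2 and r = 1 are zeros of F10

print(dF10(F(1, 2)))  # -3/8192 < 0: stable limit cycle
print(dF10(F(1, 1)))  # 3/2 > 0: unstable limit cycle
```

The signs of the derivative confirm the stability claims of the example.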

Example 2. We consider an example that corresponds to statement (b) of Theorem 1

\begin{equation}\label{E2} \left\{ \begin{array}{l} \dot{x}=y \\ \dot{y}=-x- f( x)y^{4} -g( x) y^{5}-l( x) y^{6}-h(x)y^{7} ,% \end{array}% \right. \end{equation}
(23)
where \begin{eqnarray*} f(x) &=&\varepsilon (\frac{341}{321888}+6x)+\varepsilon ^{2}(-1-3x), \\ g(x) &=&\varepsilon (\frac{429184}{675}x+\frac{182773472}{1395}x^{3})+\varepsilon ^{2}(-\frac{1}{4500}+2x^{3}), \\ h(x) &=&\varepsilon (x^{2})+\varepsilon ^{2}(5x-3x^{2}), \\ l(x) &=&\varepsilon (-\frac{155357728}{3069}x)+\varepsilon ^{2}(-x+\frac{22}{1575}), \end{eqnarray*} and \(f(x), g(x), h(x), l(x)\) have degrees \(n_{1}=1, n_{2}=3, n_{3}=2\) and \(n_{4}=1\), respectively. Since \(F_{10}\) is identically zero, we must solve the equation \(F_{20}=0\), that is, \[ r^{15}-\frac{5269}{3600}r^{13}+\frac{1529}{2880}r^{11}-\frac{341}{4800}r^{9}+\frac{11}{2880}r^{7}-\frac{1}{14400}r^{5} =0,\] which has exactly \(\left[\frac{n_{2}-1}{2}\right]+\left[\frac{n_{3}}{2}\right]+p+1=\) five positive zeros \(r_{1}=\frac{1}{5},\) \(r_{2}=\frac{1}{4}, \) \(r_{3}=\frac{1}{3},\) \(r_{4}=\frac{1}{2}\) and \(r_{5}=1\). These roots satisfy \begin{eqnarray*} \frac{dF_{20}}{dr}\left\vert _{r=r_{1}}\right. &=&\frac{252}{6103515625}\neq 0,\;\; \frac{dF_{20}}{dr}\left\vert _{r=r_{2}}\right. =-\frac{63}{671088640}\neq 0, \\ \frac{dF_{20}}{dr}\left\vert _{r=r_{3}}\right. &=&\frac{28}{23914845}\neq 0, \;\; \frac{dF_{20}}{dr}\left\vert _{r=r_{4}}\right. =-\frac{21}{163840}\neq 0, \\ \frac{dF_{20}}{dr}\left\vert _{r=r_{5}}\right. &=&\frac{6}{5}\neq 0. \end{eqnarray*} Therefore we conclude that system (23) has three unstable limit cycles, for \(r_{1}= \frac{1}{5},\) \(r_{3}=\frac{1}{3}\) and \(r_{5}=1\), and two other stable limit cycles, for \(r_{2}=\frac{1}{4}\) and \(r_{4}=\frac{1}{2}.\)
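The five zeros of \(F_{20}\) and the alternating signs of \(dF_{20}/dr\) can likewise be verified exactly. This quick check (our own sketch with the hypothetical names `F20`, `dF20`, not part of the paper) uses rational arithmetic:

```python
from fractions import Fraction as F

def F20(r):
    # F20(r) = r^15 - (5269/3600) r^13 + (1529/2880) r^11
    #          - (341/4800) r^9 + (11/2880) r^7 - (1/14400) r^5
    return (r**15 - F(5269, 3600)*r**13 + F(1529, 2880)*r**11
            - F(341, 4800)*r**9 + F(11, 2880)*r**7 - F(1, 14400)*r**5)

def dF20(r):
    # term-by-term derivative of F20
    return (15*r**14 - 13*F(5269, 3600)*r**12 + 11*F(1529, 2880)*r**10
            - 9*F(341, 4800)*r**8 + 7*F(11, 2880)*r**6 - 5*F(1, 14400)*r**4)

roots = [F(1, 5), F(1, 4), F(1, 3), F(1, 2), F(1, 1)]
assert all(F20(r) == 0 for r in roots)   # the five simple positive zeros

print([dF20(r) > 0 for r in roots])  # [True, False, True, False, True]
assert dF20(F(1, 1)) == F(6, 5)
```

The alternating sign pattern matches the stated stability of the five limit cycles.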

References

  1. Hilbert, D. (1900). Mathematische problems. lecture in: second international congress of mathematicians-Paris, France Nachrichten von der gesellschaft der wissenschaften zu göttingen, mathematisch-physikalische klasse. English Translation, 5, 253-297.[Google Scholor]
  2. Arnol'd, V. I., & Ilyashenko, Y. S. (1998). Ordinary Differential Equations Encyclopaedia of Mathematical Sciences, Vol. 1. Dynamical Systems I.[Google Scholor]
  3. Giacomini, H., Llibre, J., & Viano, M. (1996). On the nonexistence, existence and uniqueness of limit cycles. Nonlinearity, 9(2), 501-516.[Google Scholor]
  4. Blows, T. R., & Perko, L. M. (1994). Bifurcation of limit cycles from centers and separatrix cycles of planar analytic systems. Siam Review, 36(3), 341-376.[Google Scholor]
  5. Guckenheimer, J., & Holmes, P. (1983). Nonlinear Oscillations, Dynamical Systems, and Bifurcations of Vector Fields. New York: Springer.[Google Scholor]
  6. Buica, A., & Llibre, J. (2004). Averaging methods for finding periodic orbits via Brouwer degree. Bulletin des Sciences Mathematiques, 128(1), 7-22.[Google Scholor]
  7. Verhulst, F. (2006). Nonlinear Differential Equations and Dynamical Systems. Springer Science & Business Media.[Google Scholor]
  8. Llibre, J., & Mereu, A. C. (2011). Limit cycles for generalized Kukles polynomial differential systems. Nonlinear Analysis: Theory, Methods & Applications, 74(4), 1261-1271.[Google Scholor]
  9. Boulfoul, A., Makhlouf, A., & Mellahi, N. (2019). On the limit cycles for a class of generalized Kukles differential systems. Journal of Applied Analysis & Computation, 9(3), 864-883.[Google Scholor]
  10. Chen, T., & Llibre, J. (2019). Limit cycles of a second-order differential equation. Applied Mathematics Letters, 88, 111-117.[Google Scholor]
  11. García, B., Llibre, J., & Del Río, J. S. P. (2014). Limit cycles of generalized Liénard polynomial differential systems via averaging theory. Chaos, Solitons & Fractals, 62, 1-9.[Google Scholor]
  12. Kukles, I. S. (1944). Sur quelques cas de distinction entre un foyer et un centre. In Doklady Akademii Nauk (Vol. 42, No. 42, pp. 208-211).[Google Scholor]
  13. Lloyd, N. G., & Pearson, J. M. (1990). Conditions for a centre and the bifurcation of limit cycles in a class of cubic systems. In Bifurcations of Planar Vector Fields (pp. 230-242). Springer, Berlin, Heidelberg.[Google Scholor]
  14. Sadovskii, A. P. (2003). Cubic systems of nonlinear oscillations with seven limit cycles. Differential Equations, 39(4), 505-516.[Google Scholor]
  15. Llibre, J., Mereu, A. C., & Teixeira, M. A. (2010, March). Limit cycles of the generalized polynomial Liénard differential equations. In Mathematical Proceedings of the Cambridge Philosophical Society (Vol. 148, No. 2, pp. 363-383). Cambridge University Press.[Google Scholor]
  16. Sanders, J. A., Verhulst, F., & Murdock, J. (2007). Averaging Methods in Nonlinear Dynamical Systems (Vol. 59, pp. xxii+-431). New York: Springer.[Google Scholor]
OMA-Vol. 6 (2022), Issue 2, pp. 65 - 73

Open Journal of Mathematical Analysis

Solution of generalized Abel’s integral equation using orthogonal polynomials

Mamman Ojima John\(^{1,*}\), Aboiyar Terhemen\(^{1}\) and Tivde Tertsegha\(^1\)
\(^1\) Department of Mathematics/Statistics/Computer Science, Faculty of Science, Federal University of Agriculture Makurdi, Benue State, Nigeria.
Correspondence should be addressed to Mamman Ojima John at mammanojima@gmail.com

Abstract

This research presents the solution of the generalized version of Abel's integral equation, computed for both the first and second kinds. First, Abel's integral equation and its generalization were described using fractional calculus, and the properties of orthogonal polynomials were also described. We then developed a technique of solution for the generalized Abel's integral equation using infinite series of orthogonal polynomials and applied the numerical method to approximate the generalized Abel's integral equation of the first and second kind, respectively. The Riemann-Liouville fractional operator was used in these examples. Our technique was implemented in MAPLE 17 through some illustrative examples, and absolute errors were estimated. In addition, the errors between the orthogonal-polynomial solutions of Abel's integral equations of order \(0\ <\ \alpha \ <\ 1\) and the exact solutions show that the orthogonal polynomials used are highly effective and reliable, and can be used independently in situations where the exact solution is unknown, as the numerical experiments confirmed.

Keywords:

Fractional calculus; Riemann-Liouville; Singular Volterra integral equation; Abel's integral; Orthogonal polynomials; Method of collocation.

1. Introduction

Fractional calculus is a branch of mathematical analysis that investigates integrals and derivatives of fractional real and complex order with their applications. Abel's integral equation was investigated and developed by Niels Henrik Abel when he was solving and generalizing the Tautochrone problem. It enables users to compute the total time required for a particle to fall along a given curve [1]. Abel integral equations have been generalized through the theory of fractional integral equations.

In recent times, the literature shows a large number of engineering and scientific studies involving fractional-order calculus (FOC), for the simple reason that it provides accurate models of the systems being considered [2]. It is well established historically [3] that several real-life phenomena cannot be adequately represented by regular integer-order calculus but are better described by fractional-order calculus; accordingly, several approaches have been adopted to solve the Abel integral equations [4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19] that he generalized. Progress [20, 21], however, has been made with respect to the interpretation of non-integer-order integrals and derivatives. Several numerical methods [4, 5, 7, 8, 10] for the solution of this integral equation have been developed, such as solutions in distributions, Banach spaces and a host of others, with applications commonly found in modeling the dynamics of interfaces between nanoparticles and substrates [22], oscillation [23], signal processing [24], the frequency-dependent damping behavior of many viscoelastic materials [25], continuum and statistical mechanics [26], and economics [27].

The idea of using fractional integral operators to solve a system of generalized Abel integral equations can be found in [28]; numerous attempts to solve Abel's equations involving different types of operators can be found in the literature.

2. Materials and methods

This section presents some definitions and mathematical preliminaries of fractional calculus [30] that will be used in this paper. The Riemann-Liouville fractional integral of order \(\alpha\) is defined mathematically as:
\begin{equation} \label{GrindEQ__1_} J^{\alpha }u\left(x\right)=\frac{1}{\Gamma(\alpha )}\int^x_0{(x-\tau )^{\alpha -1}}u\left(\tau \right)d\tau ,\ \ x>0,\ \ \alpha >0.\tag{1} \end{equation}
Abel's integral equations are expressed in two forms, the first and the second kind:
\begin{align} \label{GrindEQ__2_} f\left(x\right)&=\int^x_0{\frac{1}{\sqrt{(x-t)}}u(t)dt},&\tag{2}\\ \label{GrindEQ__3_} u\left(x\right)&=f\left(x\right)+\int^x_0{\frac{1}{\sqrt{(x-t)}}u(t)dt},\tag{3}&\\ \label{GrindEQ__4_} f\left(x\right)&=\int^x_0{\frac{u(t)}{{(x-t)}^{\alpha }}dt} , & 0< \alpha < 1,\tag{4}\\ \label{GrindEQ__5_} u\left(x\right)&=f\left(x\right)+\int^x_0{\frac{u(t)}{{(x-t)}^{\alpha }}dt}, & 0< \alpha < 1,\tag{5} \end{align}
where Eqs \eqref{GrindEQ__4_} and \eqref{GrindEQ__5_} are called the generalized Abel integral equation and the weakly singular integral equation, respectively.

Proposition 1.[29] For \(J^{\alpha }\), the following properties hold, where \(u_{i} \in C_{\mu}\), \(i=0,\dots,n,\) and \(\mu\ge -1\):

  1. \(J^{\alpha }\left(\sum\limits^n_{i=0}{u_i\left(x\right)}\right)=\sum\limits^n_{i=0}{J^{\alpha }u_i\left(x\right)},\)
  2. \(J^{\alpha }x^{\beta }=\frac{\Gamma(\beta +1)}{\Gamma(\alpha +\beta +1)}x^{\alpha +\beta }, \beta \ >\ -1.\)
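Property 2 can be checked numerically. The following sketch (an illustration added here, not part of the original method) evaluates the Riemann--Liouville integral of \(x^{\beta}\) by quadrature, using the substitutions \(t=xs\) and \(1-s=u^{1/\alpha}\) to remove the weak singularity, and compares against the closed form:

```python
import math

def riemann_liouville_power(alpha, beta, x, n=20000):
    """Numerically evaluate J^alpha x^beta = (1/Gamma(alpha)) int_0^x (x-t)^(alpha-1) t^beta dt.

    Substituting t = x*s and then 1 - s = u^(1/alpha) gives
    x^(alpha+beta) / (alpha*Gamma(alpha)) * int_0^1 (1 - u^(1/alpha))^beta du,
    whose integrand is bounded, so the midpoint rule converges quickly.
    """
    h = 1.0 / n
    total = sum((1.0 - ((j + 0.5) * h) ** (1.0 / alpha)) ** beta for j in range(n))
    return x ** (alpha + beta) / (alpha * math.gamma(alpha)) * total * h

def closed_form(alpha, beta, x):
    # Proposition 1(2): J^alpha x^beta = Gamma(beta+1)/Gamma(alpha+beta+1) x^(alpha+beta)
    return math.gamma(beta + 1) / math.gamma(alpha + beta + 1) * x ** (alpha + beta)

alpha, beta, x = 0.5, 2.0, 0.7
print(riemann_liouville_power(alpha, beta, x), closed_form(alpha, beta, x))
```

Both values agree to many digits, as the singularity has been removed analytically before quadrature.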

2.1. Legendre polynomials

The following is Legendre's differential equation,
\begin{equation} \label{GrindEQ__6_} \frac{d}{dx}\left[\left(1-x^2\right)\frac{d}{dx}P_n\left(x\right)\right]+n\left(n+1\right)P_n\left(x\right)=0,\tag{6} \end{equation}
which is named after Adrien-Marie Legendre. This common differential equation is found in physics and other branches of science. It occurs when Laplace's equation (and related partial differential equations) are solved in spherical coordinates. Orthogonal polynomials are widely utilized in mathematics, engineering, computer science, and mathematical physics. Legendre polynomials are a frequently used type of orthogonal polynomial. The Legendre polynomials \(P_n\) satisfy the recurrence formula:
\begin{equation} \label{GrindEQ__7_} \left. \begin{array}{c} P_0\left(x\right)=1 \\ P_1\left(x\right)=x \\ \left(n+1\right)P_{n+1}\left(x\right)=\left(2n+1\right)xP_n\left(x\right)-nP_{n-1}\left(x\right) \end{array} \right\}\tag{7} \end{equation}
An elegant property of the Legendre polynomials is their orthogonality on the interval \([-1,1]\) with respect to the \(L^2\) inner product,
\begin{equation} \label{GrindEQ__8_} \int^1_{-1}{P_n\left(x\right)P_m\left(x\right)dx=\frac{2}{2n+1}}{\delta }_{nm}\,,\tag{8} \end{equation}
where \({\delta }_{nm}\) stands for the Kronecker delta, which is \(1\) if \(m = n\) and \(0\) otherwise. On the interval \([-1,1]\), the polynomials constitute a complete set, and any piecewise smooth function can be expanded in a sequence of them.
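The recurrence \eqref{GrindEQ__7_} and the orthogonality relation \eqref{GrindEQ__8_} can be verified together with a short script (a minimal Python illustration; the midpoint quadrature and point count are arbitrary choices made here):

```python
import math

def legendre(n, x):
    """Evaluate P_n(x) with the three-term recurrence (n+1)P_{n+1} = (2n+1)xP_n - nP_{n-1}."""
    if n == 0:
        return 1.0
    p_prev, p = 1.0, x
    for k in range(1, n):
        p_prev, p = p, ((2 * k + 1) * x * p - k * p_prev) / (k + 1)
    return p

def inner(n, m, pts=20001):
    """Midpoint-rule approximation of int_{-1}^{1} P_n(x) P_m(x) dx."""
    h = 2.0 / pts
    return sum(legendre(n, -1 + (j + 0.5) * h) * legendre(m, -1 + (j + 0.5) * h)
               for j in range(pts)) * h

print(inner(2, 3))          # approximately 0 (orthogonality)
print(inner(3, 3), 2 / 7)   # approximately 2/(2n+1) with n = 3
```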

2.2. Chebyshev polynomials

Chebyshev polynomials, introduced by and named after Pafnuty Chebyshev, are a sequence of polynomials related to the trigonometric multiple-angle formulae. The Chebyshev differential equation is written as
\begin{equation} \label{GrindEQ__9_} \left(1-x^2\right)\frac{d^2y}{dx^2}-x\frac{dy}{dx}+n^2y=0.\tag{9} \end{equation}
The Chebyshev polynomials of the first kind, denoted \(T_n(x)\), are a set of orthogonal polynomials defined as solutions of the Chebyshev differential equation. They are a special case of the Gegenbauer polynomials with \(\alpha =0\) and are intimately related to the trigonometric multiple-angle formulas; they are normalized so that \(T_n(1)=1\). The Chebyshev polynomial of the first kind \(T_n(t)\) is the unique polynomial in \(t\) of degree \(n\) defined by the relation \(T_n(t)=\cos(n\theta )=\cos(n\arccos t),\) where \[t = \cos\theta \quad (0\le \theta \le \pi ),\] for \(n=0,1,2,\dots\). When \(\theta \) increases from \(0\) to \(\pi\), \(t\) decreases from \(1\) to \(-1\); hence the interval \([-1,1]\) is the domain of definition of \(T_n(t)\). Successive Chebyshev polynomials can be obtained from the recursive relation
\begin{equation} \label{GrindEQ__10_} \left. \begin{array}{c} T_0(x)\ =\ 1 \\ T_1(x)\ =\ x \\ T_n\left(x\right)=\ 2xT_{n-1}\left(x\right)-\ T_{n-2}(x) \end{array} \right\}\tag{10} \end{equation}
Then the inner product is given by
\begin{equation} \label{GrindEQ__11_} < T_{i},T_{j}>=\int^{1}_{-1}{T_i\left(x\right)T_{j}\left(x\right)w\left(x\right)dx}\,,\tag{11} \end{equation}
where \(w\left(x\right)={(1-x^2)}^{-\frac{1}{2}}\). The Chebyshev polynomials are orthogonal with respect to this inner product: \(< T_i,T_j>=0\) for \(i\neq j\), while \(< T_0,T_0>=\pi\) and \(< T_n,T_n>=\frac{\pi}{2}\) for \(n\ge 1\).
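A quick sketch (illustrative only, added here) confirming that the recurrence \eqref{GrindEQ__10_} reproduces the trigonometric definition \(T_n(\cos\theta)=\cos(n\theta)\):

```python
import math

def chebyshev_T(n, x):
    """Evaluate T_n(x) via the recurrence T_n = 2x T_{n-1} - T_{n-2}."""
    if n == 0:
        return 1.0
    t_prev, t = 1.0, x
    for _ in range(1, n):
        t_prev, t = t, 2 * x * t - t_prev
    return t

# The recurrence agrees with the trigonometric definition T_n(cos theta) = cos(n theta):
for n in range(8):
    for x in (-0.9, -0.3, 0.2, 0.75):
        assert abs(chebyshev_T(n, x) - math.cos(n * math.acos(x))) < 1e-12
print("recurrence matches cos(n arccos x)")
```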

2.3. Hermite polynomials

The Hermite polynomials, named after Charles Hermite, are solutions of the Hermite equation,
\begin{equation} \label{GrindEQ__13_} y^{''}-2xy^{'}+\lambda y=0\,,\tag{12} \end{equation}
which is a second-order ordinary differential equation. The Hermite polynomials are one of the most common sets of orthogonal polynomials. Successive Hermite polynomials satisfy the recurrence relations
\begin{equation} \label{GrindEQ__14_} \left. \begin{array}{c} H_0\left(x\right)=1 \\ H_1\left(x\right)=2x \\ H_{n+1}\left(x\right)=2xH_n\left(x\right)-2nH_{n-1}\left(x\right) \end{array} \right\}\tag{13} \end{equation}
They are orthogonal on the range \((-\infty ,+\infty )\) with respect to the weighting function \( e^{{-x}^2}\):
\begin{equation} \label{GrindEQ__15_} \int^{\infty }_{-\infty }{H_m\left(x\right)H_n\left(x\right)e^{{-x}^2}dx} ={\delta }_{mn}2^nn!\sqrt{\pi }.\tag{14} \end{equation}
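The Hermite recurrence and weighted orthogonality above can likewise be checked numerically. This sketch (an illustration added here) truncates the integral to \([-8,8]\), which is harmless because \(e^{-x^2}\) decays extremely fast:

```python
import math

def hermite(n, x):
    """Evaluate the Hermite polynomial H_n(x) via H_{n+1} = 2x H_n - 2n H_{n-1}."""
    if n == 0:
        return 1.0
    h_prev, h = 1.0, 2.0 * x
    for k in range(1, n):
        h_prev, h = h, 2 * x * h - 2 * k * h_prev
    return h

def weighted_inner(n, m, L=8.0, pts=40001):
    """Midpoint approximation of int H_n(x) H_m(x) e^{-x^2} dx, truncated to [-L, L]."""
    h = 2 * L / pts
    return sum(hermite(n, x) * hermite(m, x) * math.exp(-x * x)
               for x in (-L + (j + 0.5) * h for j in range(pts))) * h

print(weighted_inner(2, 3))   # approximately 0 (orthogonality)
print(weighted_inner(3, 3), 2 ** 3 * math.factorial(3) * math.sqrt(math.pi))
```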
The Legendre, Chebyshev and Hermite polynomials thus share some standard qualities: each family satisfies a recurrence relation, solves a second-order linear differential equation, and is orthogonal with respect to an inner product.

2.4. Abel's integral equations of the first kind

Here, we consider the use of Legendre, Chebyshev and Hermite series for solving Abel's integral equations. The derivation of the method for the first kind is given below. Considering Eqs \eqref{GrindEQ__1_} and \eqref{GrindEQ__4_}, we have
\begin{equation} \label{GrindEQ__16_} \Gamma\left(1-\alpha \right)J^{1-\alpha }u\left(x\right)=\int^x_0{(x-\tau )^{-\alpha }}u\left(\tau \right)d\tau =\int^x_0{\frac{u(t)}{{(x-t)}^{\alpha }}dt}\,.\tag{15} \end{equation}
We can therefore rewrite Abel's integral equation of the first kind as follows,
\begin{equation} \label{GrindEQ__17_} f\left(x\right)=\Gamma\left(1-\alpha \right)J^{1-\alpha }u\left(x\right)\,.\tag{16} \end{equation}
Since calculating \(J^{1-\alpha }u\left(x\right)\) directly is challenging, we suggest using orthogonal polynomials to approximate \(u\left(x\right)\): it can be written as an infinite series in the Chebyshev, Legendre or Hermite basis,
\begin{equation} \label{GrindEQ__18_} u\left(x\right)=\sum^{\infty }_{i=0}{a_i{\phi }_i\left(x\right)}\,,\tag{17} \end{equation}
where \({\phi }_i\left(x\right)\) is the Legendre, Chebyshev or Hermite polynomial of degree \(i\); a suitable change of variable maps the problem to the interval \([a,\ b]\). With this, we can write \( u\left(x\right)\) as a truncated orthogonal series,
\begin{equation} \label{GrindEQ__19_} u_n\left(x\right)=\sum^n_{i=0}{a_i{\phi }_i\left(x\right).}\tag{18} \end{equation}
so that \(u_n\left(x\right)\) will be an approximation of the solution of Abel's integral equation. We can now write \eqref{GrindEQ__17_} as follows,
\begin{equation} \label{GrindEQ__20_} f\left(x\right)=\Gamma\left(1-\alpha \right)\left(\sum^n_{i=0}{a_iJ^{1-\alpha }{\phi }_i\left(x\right)}\right).\tag{19} \end{equation}
Using the linearity of the fractional integral in accordance with Proposition 1, only \(J^{1-\alpha }{\phi }_i\) is required. We therefore write
\begin{equation} \label{GrindEQ__21_} {\phi }_i\left(x\right)=\sum^i_{k=0}{b_k}x^k\,,\tag{20} \end{equation}
where \(b_k\), \(k=0,\dots,i\), are the coefficients of the orthogonal polynomial of degree \(i\) defined by \eqref{GrindEQ__7_}, \eqref{GrindEQ__10_} and \eqref{GrindEQ__14_}. Applying \(J^{1-\alpha }\) to \eqref{GrindEQ__21_}, we have
\begin{equation} \label{GrindEQ__22_} {J^{1-\alpha }\phi }_i\left(x\right)=\sum^i_{k=0}{b_kJ^{1-\alpha }}x^k.\tag{21} \end{equation}
So, substituting \eqref{GrindEQ__22_} in \eqref{GrindEQ__20_} gives the following form,
\begin{equation} \label{GrindEQ__23_} f\left(x\right)=\Gamma\left(1-\alpha \right)\sum^n_{i=0}{a_i\sum^i_{k=0}{b_kJ^{1-\alpha }}x^k}.\tag{22} \end{equation}
To improve the efficiency of this strategy, we can reorganize the orthogonal series as follows,
\begin{equation} \label{GrindEQ__24_} \sum^n_{i=0}{a_i{\phi }_i\left(x\right)=}\sum^n_{i=0}{c_i}x^i\,.\tag{23} \end{equation}
Here, each \(c_i\) is a linear combination of the \(a_i\), \(i=0,1,\dots ,n\). Considering \eqref{GrindEQ__24_} and \eqref{GrindEQ__20_} then yields
\begin{equation} \label{GrindEQ__25_} f\left(x\right)=\Gamma\left(1-\alpha \right)J^{1-\alpha }\left(\sum^n_{i=0}{c_i}x^i\right).\tag{24} \end{equation}
Applying the linearity property of the fractional integral according to Proposition 1, we obtain
\begin{equation} \label{GrindEQ__26_} f\left(x\right)=\Gamma\left(1-\alpha \right)\left(\sum^n_{i=0}{c_iJ^{1-\alpha }}x^i\right)\,.\tag{25} \end{equation}
Furthermore, taking the roots of an orthogonal polynomial of degree \(n+1\) as collocation points in \eqref{GrindEQ__26_} leads to a system of linear equations, which when solved yields the required solution of Abel's integral equation as the truncated orthogonal series \eqref{GrindEQ__19_}. The transformation in \eqref{GrindEQ__25_} reduces the number of times the term \(J^{1-\alpha }x^i\) is computed from \(n^2\) to \(n\).
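The procedure of Eqs \eqref{GrindEQ__25_}--\eqref{GrindEQ__26_} can be sketched in a few lines of Python. This is a minimal illustration, not the authors' MAPLE implementation: it works directly with the monomial form \eqref{GrindEQ__24_}, uses \(J^{1-\alpha}x^i=\frac{\Gamma(i+1)}{\Gamma(i+2-\alpha)}x^{i+1-\alpha}\) from Proposition 1, and (as an assumed choice) takes Chebyshev roots mapped to \((0,1)\) as collocation points. The test problem \(f(x)=e^x-1\) with \(\alpha=\frac12\) has the known solution \(\frac{1}{\sqrt{\pi}}e^x\,\mathrm{erf}(\sqrt{x})\):

```python
import math

def solve(A, b):
    """Gaussian elimination with partial pivoting (no external libraries)."""
    n = len(b)
    M = [row[:] + [b[i]] for i, row in enumerate(A)]
    for col in range(n):
        piv = max(range(col, n), key=lambda r: abs(M[r][col]))
        M[col], M[piv] = M[piv], M[col]
        for r in range(col + 1, n):
            factor = M[r][col] / M[col][col]
            for c in range(col, n + 1):
                M[r][c] -= factor * M[col][c]
    x = [0.0] * n
    for r in range(n - 1, -1, -1):
        x[r] = (M[r][n] - sum(M[r][c] * x[c] for c in range(r + 1, n))) / M[r][r]
    return x

def abel_first_kind(f, alpha, n):
    """Collocation for f(x) = Gamma(1-alpha) J^{1-alpha} u(x), with u ~ sum_i c_i x^i.

    Uses J^{1-alpha} x^i = Gamma(i+1)/Gamma(i+2-alpha) x^{i+1-alpha} (Proposition 1)
    and, as an assumed choice, Chebyshev roots mapped to (0,1) as collocation points.
    """
    pts = [0.5 + 0.5 * math.cos((2 * j + 1) * math.pi / (2 * (n + 1))) for j in range(n + 1)]
    g = math.gamma(1 - alpha)
    A = [[g * math.gamma(i + 1) / math.gamma(i + 2 - alpha) * x ** (i + 1 - alpha)
          for i in range(n + 1)] for x in pts]
    c = solve(A, [f(x) for x in pts])
    return lambda x: sum(ci * x ** i for i, ci in enumerate(c))

u = abel_first_kind(lambda x: math.exp(x) - 1.0, 0.5, 10)
exact = lambda x: math.exp(x) * math.erf(math.sqrt(x)) / math.sqrt(math.pi)
print(u(0.5), exact(0.5))
```

With \(n=10\) the approximation is accurate to a few decimal places over \((0,1]\), consistent with the error levels reported in the tables of §3.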

2.5. Abel's integral equations of the second kind

In a similar fashion, we derive for the second kind as follows: We can rewrite \eqref{GrindEQ__5_} by considering \eqref{GrindEQ__1_} as
\begin{equation} \label{GrindEQ__27_} u\left(x\right)=f\left(x\right)+\Gamma\left(1-\alpha \right)J^{1-\alpha }u\left(x\right)\,.\tag{26} \end{equation}
Also by substituting \eqref{GrindEQ__19_} in \eqref{GrindEQ__27_}, we have
\begin{equation} \label{GrindEQ__28_} \sum^n_{i=0}{a_i{\phi }_i\left(x\right)}=f\left(x\right)+\Gamma\left(1-\alpha \right)\sum^n_{i=0}{a_i{J^{1-\alpha }\phi }_i\left(x\right).}\tag{27} \end{equation}
From \eqref{GrindEQ__26_} and \eqref{GrindEQ__28_}, after computing \(J^{1-\alpha }x^i\) and substituting the collocation points we have a system of linear equations, which when solved gives the coefficients of the approximated solution of Abel's integral equation.
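The second-kind system can be sketched similarly (again an illustration added here, not the authors' implementation). For a test problem of the form \(u(x)=f(x)-\int^x_0 u(t)(x-t)^{-\alpha}\,dt\), the integral moves to the left-hand side, so each matrix entry is \(x_j^i+\Gamma(1-\alpha)\,J^{1-\alpha}x^i(x_j)\); the collocation points below are an assumed choice (Chebyshev roots mapped to \((0,1)\)):

```python
import math

def solve(A, b):
    """Gaussian elimination with partial pivoting (no external libraries)."""
    n = len(b)
    M = [row[:] + [b[i]] for i, row in enumerate(A)]
    for col in range(n):
        piv = max(range(col, n), key=lambda r: abs(M[r][col]))
        M[col], M[piv] = M[piv], M[col]
        for r in range(col + 1, n):
            factor = M[r][col] / M[col][col]
            for c in range(col, n + 1):
                M[r][c] -= factor * M[col][c]
    x = [0.0] * n
    for r in range(n - 1, -1, -1):
        x[r] = (M[r][n] - sum(M[r][c] * x[c] for c in range(r + 1, n))) / M[r][r]
    return x

def abel_second_kind(f, alpha, n):
    """Collocation for u(x) + Gamma(1-alpha) J^{1-alpha} u(x) = f(x), u ~ sum_i c_i x^i."""
    pts = [0.5 + 0.5 * math.cos((2 * j + 1) * math.pi / (2 * (n + 1))) for j in range(n + 1)]
    g = math.gamma(1 - alpha)
    A = [[x ** i + g * math.gamma(i + 1) / math.gamma(i + 2 - alpha) * x ** (i + 1 - alpha)
          for i in range(n + 1)] for x in pts]
    c = solve(A, [f(x) for x in pts])
    return lambda x: sum(ci * x ** i for i, ci in enumerate(c))

# Test problem: u(x) = 2*sqrt(x) - int_0^x u(t)/sqrt(x-t) dt,
# with known solution 1 - exp(pi x) erfc(sqrt(pi x)).
u = abel_second_kind(lambda x: 2.0 * math.sqrt(x), 0.5, 10)
exact = lambda x: 1.0 - math.exp(math.pi * x) * math.erfc(math.sqrt(math.pi * x))
print(u(0.5), exact(0.5))
```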

3. Results and discussion

Example 1. The Abel integral equation of the first kind \[\int^x_0{\frac{u(t)}{\sqrt{(x-t)}}dt}=e^x-1\,,\] has exact solution \(\frac{1}{\sqrt{\pi }}e^x{\mathrm{erf} \left(\sqrt{x}\right)\ },\) where \({\mathrm{erf} \left(x\right)\ }\) is the error function defined by \[{\mathrm{erf} \left(x\right)\ }=\frac{2}{\sqrt{\pi }}\int^x_0{e^{-{\varphi }^2}\,d\varphi }.\] From Eq. \eqref{GrindEQ__26_}, we have \[f\left(x\right)=\Gamma\left(1-\alpha \right)\left(\sum^n_{i=0}{a_iJ^{1-\alpha }}{\phi }_i\left(x\right)\right)\,.\] Here \({\phi }_i\left(x\right)\) is the Legendre, Chebyshev or Hermite polynomial of degree \(i\) as defined in \eqref{GrindEQ__18_} and \eqref{GrindEQ__19_}; we can solve for each polynomial by expressing it in terms of its recurrence relation defined earlier. With the aid of MAPLE 17 we obtain the numerical solutions in Tables 1 and 2.

Example 2. The Abel integral equation of the second kind \[u\left(x\right)=2\sqrt{x}-\int^x_0{\frac{u\left(t\right)}{\sqrt{\left(x-t\right)}}dt}\,\] has exact solution \(1-e^{\pi x}\,\mathrm{erfc}(\sqrt{\pi x})\), where \(\mathrm{erfc}\) is the complementary error function defined as \[\mathrm{erfc}\left(x\right)=\frac{2}{\sqrt{\pi }}\int^{\infty }_x{e^{-{\varphi }^2}\,d\varphi }.\] From Eq. \eqref{GrindEQ__28_}, \[\sum^n_{i=0}{a_i\left\{{\phi }_i\left(x\right)-\Gamma\left(1-\alpha \right){J^{1-\alpha }{\phi }_i\left(x\right)}\right\}}=f\left(x\right)\,.\] The numerical results for this example are presented in Tables 3 and 4.

Table 1. Approximate solution, absolute errors and exact values of Example 1 for \(n=10\).
| \(x\) (step size) | Exact Value | Legendre Polynomial | Abs. Error | Chebyshev Polynomial | Abs. Error |
|---|---|---|---|---|---|
| 0.1 | 0.2152905021 | 0.2139150700 | 0.0014 | 0.2139151530 | 0.0014 |
| 0.2 | 0.3258840762 | 0.3256471202 | \(2.3696\times {10}^{-4}\) | 0.3256472151 | \(2.3686\times {10}^{-4}\) |
| 0.3 | 0.4275656577 | 0.4274217109 | \(1.4395\times {10}^{-4}\) | 0.4274218095 | \(1.4385\times {10}^{-4}\) |
| 0.4 | 0.5293330732 | 0.5292480324 | \(8.5041\times {10}^{-5}\) | 0.5292481234 | \(8.4950\times {10}^{-5}\) |
| 0.5 | 0.6350318720 | 0.6349698900 | \(6.1982\times {10}^{-5}\) | 0.6349699660 | \(6.1906\times {10}^{-5}\) |
| 0.6 | 0.7470401733 | 0.7469949317 | \(4.5242\times {10}^{-5}\) | 0.7469949759 | \(4.5197\times {10}^{-5}\) |
| 0.7 | 0.8671875858 | 0.8671507681 | \(3.6818\times {10}^{-5}\) | 0.8671507891 | \(3.6797\times {10}^{-5}\) |
| 0.8 | 0.9970893764 | 0.997061161 | \(2.8215\times {10}^{-5}\) | 0.997061163 | \(2.8213\times {10}^{-5}\) |
| 0.9 | 1.138298578 | 1.138271221 | \(2.7357\times {10}^{-5}\) | 1.138271301 | \(2.7277\times {10}^{-5}\) |
| 1.0 | 1.292388093 | 1.29237800 | \(1.0093\times {10}^{-5}\) | 1.29237801 | \(1.0083\times {10}^{-5}\) |
Table 2. Approximate solution, absolute errors and exact values of Example 1 for \(n=10\).
| \(x\) (step size) | Exact Value | Hermite Polynomial | Abs. Error |
|---|---|---|---|
| 0.1 | 0.2152905021 | 0.2139107224 | 0.0014 |
| 0.2 | 0.3258840762 | 0.3256418254 | \(2.4225\times {10}^{-4}\) |
| 0.3 | 0.4275656577 | 0.4274153293 | \(1.5033\times {10}^{-4}\) |
| 0.4 | 0.5293330732 | 0.5292405512 | \(9.2522\times {10}^{-5}\) |
| 0.5 | 0.6350318720 | 0.6349614560 | \(7.0416\times {10}^{-5}\) |
| 0.6 | 0.7470401733 | 0.7469858896 | \(5.4284\times {10}^{-5}\) |
| 0.7 | 0.8671875858 | 0.8671416940 | \(4.5892\times {10}^{-5}\) |
| 0.8 | 0.9970893764 | 0.9970529248 | \(3.6452\times {10}^{-5}\) |
| 0.9 | 1.138298578 | 1.138264975 | \(3.3603\times {10}^{-5}\) |
| 1.0 | 1.292388093 | 1.292375400 | \(1.2693\times {10}^{-5}\) |
Table 3. Approximate solution, absolute errors and exact values of Example 2 for \(n=10\).
| \(x\) (step size) | Exact Value | Legendre Polynomial | Abs. Error | Chebyshev Polynomial | Abs. Error |
|---|---|---|---|---|---|
| 0.1 | 0.4140591693 | 0.4102961743 | 0.0038 | 0.4102960583 | 0.0038 |
| 0.2 | 0.5083515180 | 0.5066321249 | 0.0017 | 0.5066320640 | 0.0017 |
| 0.3 | 0.5643086686 | 0.5631930727 | 0.0011 | 0.5631930909 | 0.0011 |
| 0.4 | 0.6033472169 | 0.6025423305 | \(8.0489\times {10}^{-4}\) | 0.6025424464 | \(8.0477\times {10}^{-4}\) |
| 0.5 | 0.6328679763 | 0.6322458378 | \(6.2214\times {10}^{-4}\) | 0.6322460625 | \(6.2191\times {10}^{-4}\) |
| 0.6 | 0.6563234564 | 0.6558226175 | \(5.0084\times {10}^{-4}\) | 0.6558229644 | \(5.0049\times {10}^{-4}\) |
| 0.7 | 0.6756010623 | 0.6751849583 | \(4.1610\times {10}^{-4}\) | 0.6751854239 | \(4.1564\times {10}^{-4}\) |
| 0.8 | 0.6918419681 | 0.691489126 | \(3.5284\times {10}^{-4}\) | 0.6914896144 | \(3.5235\times {10}^{-4}\) |
| 0.9 | 0.7057865180 | 0.705480729 | \(3.0579\times {10}^{-4}\) | 0.7054813344 | \(3.0518\times {10}^{-4}\) |
| 1.0 | 0.7179408238 | 0.71767526 | \(2.6556\times {10}^{-4}\) | 0.7176760000 | \(2.6482\times {10}^{-4}\) |
Table 4. Approximate solution, absolute errors and exact values of Example 2 for \(n=10\).
| \(x\) (step size) | Exact Value | Hermite Polynomial | Abs. Error |
|---|---|---|---|
| 0.1 | 0.4140591693 | 0.4103001204 | 0.0038 |
| 0.2 | 0.5083515180 | 0.5066359852 | 0.0017 |
| 0.3 | 0.5643086686 | 0.5631970712 | 0.0011 |
| 0.4 | 0.6033472169 | 0.6025466640 | \(8.0055\times {10}^{-4}\) |
| 0.5 | 0.6328679763 | 0.6322506550 | \(6.1732\times {10}^{-4}\) |
| 0.6 | 0.6563234564 | 0.6558280064 | \(4.9545\times {10}^{-4}\) |
| 0.7 | 0.6756010623 | 0.6751909250 | \(4.1014\times {10}^{-4}\) |
| 0.8 | 0.6918419681 | 0.6914954448 | \(3.4652\times {10}^{-4}\) |
| 0.9 | 0.7057865180 | 0.7054873805 | \(2.9914\times {10}^{-4}\) |
| 1.0 | 0.7179408238 | 0.7176818000 | \(2.5902\times {10}^{-4}\) |

Figure 1. Numerical and exact solutions of generalized Abel’s integral equation of  Example 1  with \(\alpha\) = 0.5.

Figure 2. Error for generalized Abel’s integral equation for  Example 1 with \(\alpha \) = \(\frac{1}{2}\).

Figure 3. Numerical and exact solutions of generalized Abel’s integral equation of  Example 2 with \(\alpha \) = \(\frac{1}{2}\).

Figure 4. Error for generalized Abel’s integral equation for  Example 2  with \(\alpha \) = \(\frac{1}{2}\).

4. Conclusion

In this research work, we implemented a method based on truncated series of orthogonal polynomials to approximate the solution of the generalized Abel integral equation, which was generalized with the aid of fractional calculus. We described the properties of the Legendre, Chebyshev and Hermite polynomials, and discussed and illustrated the numerical solution of generalized Abel integral equations using orthogonal polynomials. The efficiency of the approach was illustrated by solving several examples of Abel integral equations of order \(0< \alpha< 1\). We found the method to be accurate and efficient in finding numerical solutions for these equations. Moreover, the orthogonal polynomials produce excellent approximations in comparison with the exact solutions and with other methods and solvers throughout the applicable domain, and the errors between the orthogonal-polynomial solutions and the exact solutions are small.

Acknowledgments

The authors would like to thank the referee for his/her valuable comments that resulted in the present improved version of the article.

Conflicts of Interest:

"The authors declare no conflict of interest."

References

  1. Podlubny, I., Magin, R. L., & Trymorush, I. (2017). Niels Henrik Abel and the birth of fractional calculus. Fractional Calculus and Applied Analysis, 20(5), 1068-1075.[Google Scholor]
  2. Mamman, J. O., & Aboiyar, T. (2020). A numerical calculation of arbitrary integrals of functions. Advanced Journal of Graduate Research, 7(1), 11-17.[Google Scholor]
  3. Machado, J. T., & Kiryakova, V. (2017). The chronicles of fractional calculus. Fractional Calculus and Applied Analysis, 20(2), 307-336.[Google Scholor]
  4. Zarei, E., & Noeiaghdam, S. (2018). Solving generalized Abel's integral equations of the first and second kinds via Taylor-collocation method. arXiv preprint arXiv:1804.08571.[Google Scholor]
  5. Li, C., Humphries, T., & Plowman, H. (2018). Solutions to Abel's integral equations in distributions. Axioms, 7(3), Article No. 66.[Google Scholor]
  6. Li, C., & Srivastava, H. M. (2021). Uniqueness of solutions of the generalized Abel integral equations in Banach spaces. Fractal and Fractional, 5(3), Article No. 105.[Google Scholor]
  7. Li, C., & Plowman, H. (2019). Solutions of the generalized Abel's integral equations of the second kind with variable coefficients. Axioms, 8(4), Article No. 137. [Google Scholor]
  8. Kaewnimit, K., Wannalookkhee, F., Nonlaopon, K., & Orankitjaroen, S. (2021). The solutions of some Riemann-Liouville fractional integral equations. Fractal and Fractional, 5(4), Article no. 154. [Google Scholor]
  9. Deutsch, M., Notea, A., & Pal, D. (1990). Inversion of Abel's integral equation and its application to NDT by X-ray radiography. NDT international, 23(1), 32-38.[Google Scholor]
  10. Chakrabarti, A., & George, A. J. (1994). A formula for the solution of general Abel integral equation. Applied Mathematics Letters, 7(2), 87-90.[Google Scholor]
  11. Hilfer, R., & Luchko, Y. (2019). Desiderata for fractional derivatives and integrals. Mathematics, 7(2), Article No. 149. [Google Scholor]
  12. Garrappa, R., Kaslik, E., & Popolizio, M. (2019). Evaluation of fractional integrals and derivatives of elementary functions: Overview and tutorial. Mathematics, 7(5), Article no. 407. [Google Scholor]
  13. Kiryakova, V. (2017, December). Use of fractional calculus to evaluate some improper integrals of special functions. In AIP Conference Proceedings (Vol. 1910, No. 1, p. 050012). AIP Publishing LLC.[Google Scholor]
  14. Agarwal, P. (2013). Fractional integration of the product of two multivariables H-function and a general class of polynomials. In Advances in Applied Mathematics and Approximation Theory (pp. 359-374). Springer, New York, NY.[Google Scholor]
  15. s
  16. Kochubei, A., & Luchko, Y. (Eds.). (2019). Basic Theory. Walter de Gruyter GmbH & Co KG.[Google Scholor]
  17. Tenreiro Machado, J. A., Kiryakova, V., Mainardi, F., & Momani, S. (2018). Fractional calculus's adventures in Wonderland (Round table held at ICFDA 2018). Fractional Calculus and Applied Analysis, 21(5), 1151-1155.[Google Scholor]
  18. Liu, F., Meerschaert, M. M., Momani, S., Leonenko, N. N., Chen, W., & Agrawal, O. P. (2010). Fractional differential equations. International Journal of Differential Equations, 2010, Article ID 215856.[Google Scholor]
  19. Ghosh, U., Sarkar, S., & Das, S. (2015). Solution of system of linear fractional differential equations with modified derivative of Jumarie type. American Journal of Mathematical Analysis, 3(3), 72-84.[Google Scholor]
  20. Razzaghi, M. (2018). A numerical scheme for problems in fractional calculus. In ITM Web of Conferences (Vol. 20, p. 02001). EDP Sciences.[Google Scholor]
  21. Tarasov, V. E., & Tarasova, S. S. (2019). Probabilistic interpretation of Kober fractional integral of non-integer order. Progress in Fractional Differentiation & Applications, 5(1), 1-5.[Google Scholor]
  22. Podlubny, I. (2001). Geometric and physical interpretation of fractional integration and fractional differentiation. Fractional Calculus and Applied Analysis, 5(4), 367-386.[Google Scholor]
  23. Chow, T. S. (2005). Fractional dynamics of interfaces between soft-nanoparticles and rough substrates. Physics Letters A, 342(1-2), 148-155.[Google Scholor]
  24. Feng, Q. (2019). Oscillation for a class of fractional differential equation. Journal of Applied Mathematics and Physics, 7(7), 1429.[Google Scholor]
  25. Panda, R., & Dash, M. (2006). Fractional generalized splines and signal processing. Signal Processing, 86(9), 2340-2350.[Google Scholor]
  26. Bagley, R. L., & Torvik, P. J. (1983). A theoretical basis for the application of fractional calculus to viscoelasticity. Journal of Rheology, 27(3), 201-210.[Google Scholor]
  27. Mainardi, F. (1997). Fractional calculus. In Fractals and Fractional Calculus in Continuum Mechanics (pp. 291-348). Springer, Vienna.[Google Scholor]
  28. Baillie, R. T. (1996). Long memory processes and fractional integration in econometrics. Journal of Econometrics, 73(1), 5-59.[Google Scholor]
  29. Gong, C., Bao, W., Tang, G., Jiang, Y., & Liu, J. (2015). Computational challenge of fractional differential equations and the potential solutions: a survey. Mathematical Problems in Engineering, 2015, Article ID 258265. [Google Scholor]
  30. Avazzadeh, Z., Shafiee, B., & Loghmani, G. B. (2011). Fractional calculus for solving Abel's integral equations using Chebyshev polynomials. Applied Mathematical Sciences, 5(45), 2207-2216.[Google Scholor]
  31. Diethelm, K., Ford, N. J., & Freed, A. D. (2004). Detailed error analysis for a fractional Adams method. Numerical Algorithms, 36(1), 31-52.[Google Scholor]
Stabilities of non-standard Euler-Maruyama scheme’s for Vasicek and geometric brownian motion models https://old.pisrt.org/psr-press/journals/oma-vol-6-issue-2-2022/stabilities-of-non-standard-euler-maruyama-schemes-for-vasicek-and-geometric-brownian-motion-models/ Fri, 30 Dec 2022 18:07:59 +0000 https://old.pisrt.org/?p=6939
OMA-Vol. 6 (2022), Issue 2, pp. 51 - 64 Open Access Full-Text PDF
Badibi O. Christopher, Ramadhani I., Ndondo M. Apollinaire and Kumwimba S. Didier

Open Journal of Mathematical Analysis

Stabilities of non-standard Euler-Maruyama schemes for Vasicek and geometric Brownian motion models

Badibi O. Christopher\(^{1,*}\), Ramadhani I.\(^{2}\), Ndondo M. Apollinaire\(^{1}\) and Kumwimba S. Didier\(^1\)
\(^1\) Département de Mathématiques et Informatique (RDC), Faculté des Sciences, Université de Lubumbashi, Democratic Republic of the Congo.
\(^2\) Département de Mathématiques, Informatique et Statistiques(RDC), Faculté des Sciences et Technologies, Université de Kinshasa, Democratic Republic of the Congo.
Correspondence should be addressed to Badibi O. Christopher at christopheromak2014@gmail.com

Abstract

Stochastic differential equations (SDEs) are a powerful tool for modeling certain random trajectories of diffusion phenomena in the physical, ecological, economic, and management sciences. However, except in some cases, it is generally impossible to find an explicit solution to these equations. In this case, the numerical approach is the only favorable possibility to find an approximative solution. In this paper, we present the mean and mean-square stability of the Non-standard Euler-Maruyama numerical scheme using the Vasicek and geometric Brownian motion models.

Keywords:

Brownian motion; Stochastic differential equations; Stabilities; Non-standard Euler-Maruyama scheme; Vasicek and geometric Brownian motion models.

1. Introduction

In order to construct continuous, strongly Markovian processes whose generators are second-order differential operators, called diffusions [1, 2], Itô developed stochastic differential equations: ordinary differential equations perturbed by random terms, or integral equations in which the integrals are taken with respect to a Brownian motion. In general, however, finding explicit solutions of stochastic differential equations (SDEs) is difficult or impossible, except in cases where the diffusion and drift coefficients are linear [3].

This is why the numerical approach is relevant: numerical methods exist to predict in advance the qualitative behavior, such as stability, of solutions of stochastic differential equations. In this paper, we apply the approach described in [1, 4] to analyze the stability of the Vasicek and geometric Brownian motion models under the non-standard Euler-Maruyama scheme. We first present some basic concepts on the non-standard finite difference scheme for ordinary differential equations, the classical Euler-Maruyama scheme, and the non-standard scheme for SDEs.

2. Preliminaries notions

In this section, we present some important tools in connection with stochastic differential equations, stabilities, and numerical schemes, such as the Non-standard finite difference scheme, the Euler-Maruyama scheme, and the Non-standard Euler-Maruyama scheme.

2.1. Stochastic differential equation and stabilities

This section presents some definitions in connection with stochastic differential equations and the stabilities of solutions of SDEs.

Definition 1.(Stochastic differential equation (SDE), [5]) Let \(\left(\Omega,\mathcal{F},\left(\mathcal{F}_t\right)_{t\geq 0},\mathbb{P} \right)\) be a filtered probability space and \(\left(B_t\right)_{t\geq 0}\) a standard Brownian motion on \(\mathbb{R}^d\) defined on this space. A stochastic differential equation (SDE) on \(\mathbb{R}^d \) is an equation of the form:

\begin{equation}\tag{1} \left\{ \begin{array}{ll} \text{d}X_t&= b\left(t,X_t\right)\text{ d}t+ \sigma \left(t,X_t\right)\text{ d}B_t,\\ X\left(0\right)&= X_o, \end{array} \right. \label{intelligent} \end{equation}
with the drift coefficient: \[b : \left[ 0,T\right] \times \mathbb{R}^n \longrightarrow \mathbb{R}^n\,,\] and the diffusion coefficient: \[\sigma : \left[ 0,T\right] \times \mathbb{R}^n \longrightarrow \mathbb{R}^{n\times d}\,,\] where \(X_o\) is a random variable independent of \(\left(B_t\right)_{t\geq 0}\).

Remark 1.

  1. The white noise \( \sigma \left(t,X_t\right) \) can be additive; in that case it does not depend on the state of the system.
  2. The white noise \( \sigma \left(t,X_t\right) \) can be multiplicative; in that case it depends on the state of the system.

Theorem 1. (Existence and uniqueness [6]) Assume that there is a positive constant \( K \) such that for all \( t\geq 0\) and \(X,Y \in \mathbb{R}^d\):

  1. Lipschitz condition: \[ |b\left(t,X\right)-b\left(t,Y\right)|+ |\sigma\left(t,X\right)-\sigma\left(t,Y\right)|\leq K|X-Y|.\]
  2. Linear growth condition: \[|b\left(t,X\right)|\leq K\left(1+|X|\right),\] \[|\sigma \left(t,X\right)|\leq K\left(1+|X|\right).\]
Then the SDE \( \eqref{intelligent}\) admits, for any square-integrable initial condition \(X_o\) \(\left(E\left[|X_o|^2\right]< \infty\right)\), a unique strong solution \(\left(X_t\right)_{t\in \left[0,T\right]}\), almost surely continuous and satisfying the following condition: \[ E\left(\underset{0\leq t\leq T}{\sup}|X_t|^2\right)<\infty\,.\]
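To illustrate Theorem 1 numerically (a sketch added here; the model and all parameter values are arbitrary choices), the code below simulates the Vasicek model \(dX_t=a(b-X_t)\,dt+\sigma\,dB_t\), whose coefficients satisfy the Lipschitz and linear growth conditions, with the classical Euler-Maruyama scheme, and compares the sample mean at time \(T\) with the exact mean \(E[X_T]=b+(X_o-b)e^{-aT}\):

```python
import math
import random

def euler_maruyama_vasicek(a, b, sigma, x0, T, steps, paths, seed=1):
    """Simulate the Vasicek SDE dX = a(b - X) dt + sigma dB by Euler-Maruyama
    and return the Monte Carlo estimate of E[X_T]."""
    rng = random.Random(seed)
    dt = T / steps
    sq = math.sqrt(dt)
    total = 0.0
    for _ in range(paths):
        x = x0
        for _ in range(steps):
            x += a * (b - x) * dt + sigma * sq * rng.gauss(0.0, 1.0)
        total += x
    return total / paths

a, b, sigma, x0, T = 1.0, 0.5, 0.2, 1.0, 1.0
approx_mean = euler_maruyama_vasicek(a, b, sigma, x0, T, steps=100, paths=5000)
exact_mean = b + (x0 - b) * math.exp(-a * T)
print(approx_mean, exact_mean)
```

The two values agree up to the Euler discretization bias and the Monte Carlo sampling error.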

According to [6], Eq. \eqref{intelligent} has one and only one solution when the Lipschitz and linear growth conditions are satisfied.

Definition 2. (Asymptotic stability in probability in the large [7]) The solution is said to be asymptotically stochastically stable in the large if for all \( X_o \in \mathcal{L}^2_{\mathcal{F}_t}\left(\left[-T,0\right],\mathbb{R}^n\right),\) \[ \mathbb{P}\left\lbrace \underset{t\longrightarrow\infty}{\lim}X\left(t\right)=0\right\rbrace=1.\]

Definition 3. (Stability of \(p^{th}\) moment [1])

  1. Let \(p\geq 2\); we say that a solution of \(\eqref{intelligent}\) is stable in \(p^{th}\) moment if for all \(\epsilon>0\) there exists \(\delta>0\) such that \[ E\left[ \underset{t>0}{\sup}|X\left(t\right)|^p \right]< \epsilon \quad \text{whenever} \quad |X_o|< \delta.\]
  2. Let \(p\geq 2\); we say that a solution of \(\eqref{intelligent}\) is asymptotically stable in \(p^{th}\) moment if it is stable in \(p^{th}\) moment and, for all \(X_o\in \mathcal{L}^2_{\mathcal{F}_{t_o}}\left(\left[-T,0\right],\mathbb{R}^n\right)\), \[\underset{T\longrightarrow\infty}{\lim}E\left[ \underset{t> T}{\sup}|X\left(t\right)|^p \right]=0\,.\]

2.2. Numerical schemes of SDEs

In this section, we present two numerical schemes adapted to SDEs: firstly the Euler-Maruyama scheme and secondly the non-standard Euler-Maruyama scheme.

Definition 4.(Non-standard finite difference schemes [8, 9]) Consider an ordinary differential equation of the general form:

\begin{equation}\tag{2} \dfrac{dX}{dt}=f\left( X(t)\right) , \label{equa41} \end{equation}
with \(X \in \mathbb{R}^{m}\) and \(f\in C^{2}(\mathbb{R}^{m},\mathbb{R}^{m})\). Let \(I=[0,T]\) with \(T \in\mathbb{R}^+\), and let \(\Delta t > 0\) be the step size; for \(k \in \mathbb{N} \), we write \(t_k\) for the discrete time \(t_k=k\Delta t \). A general one-step numerical scheme of non-standard type with step \(\Delta t\) which approximates the solution of a system of the form \eqref{equa41} has the form: \begin{equation*} X_{k+1}=\phi({\Delta t })(X_k)=\phi({\Delta t ,X_k})\,, \end{equation*} where \(\phi({\Delta t })\) is \(C^2(\mathbb{R}^{m},\mathbb{R}^{m})\) and \(X_0\) is the initial condition; \eqref{equa41} is discretized as: \begin{equation*} \dfrac{dX}{dt} \simeq \dfrac{X_{k+1}-X_k}{\phi({\Delta t })}=f(X_k,X_{k+1},\Delta t) \label{equa11}\,, \end{equation*} so that
\begin{equation}\tag{3} X_{k+1}=X_{k}+\phi({\Delta t }) f(X_k,X_{k+1},\Delta t) \,, \end{equation}
with \[ \phi({\Delta t })=\Delta t + \mathcal{O}(\Delta t ^2 )\text{ as } \Delta t \longrightarrow 0\,. \]

Among the possible choices of the function \( \phi({\Delta t } )\), the most commonly used form is [16]: \begin{equation*} \phi({\Delta t })=\dfrac{1-e^{\lambda \Delta t}}{\lambda} \quad \forall \quad \lambda \in \mathbb{R}\,. \label{equab} \end{equation*} Mickens in [11, 12, 13, 14] states five rules to be respected in order to build a good non-standard finite difference scheme. An advantage of using this scheme is that issues related to consistency, stability and convergence do not arise. Let us state the convergence of this scheme.
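As a quick numerical sanity check (a minimal sketch; the decay rate \(\lambda = 2\) and the step sizes are illustrative, and we use the sign convention \(\phi(\Delta t)=(1-e^{-\lambda\Delta t})/\lambda\) that reappears later for the Vasicek model), the denominator function is positive and satisfies \(\phi(\Delta t)=\Delta t + O(\Delta t^2)\):

```python
import math

def phi(dt, lam):
    """Mickens-type denominator function: phi(dt) = (1 - exp(-lam*dt)) / lam."""
    return (1.0 - math.exp(-lam * dt)) / lam

lam = 2.0
for dt in (0.1, 0.01, 0.001):
    remainder = abs(phi(dt, lam) - dt)   # should shrink like dt**2
    print(dt, phi(dt, lam), remainder / dt ** 2)
```

The printed ratio \( \text{remainder}/\Delta t^2 \) stabilizes near \(\lambda/2\), which is exactly the second-order remainder predicted by the expansion of the exponential.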

Definition 5. (Convergence of the non-standard scheme [14]) A numerical scheme converges if the numerical solution \(X_k\) satisfies \begin{equation*} \sup_{0 \leq t_k \leq T} \|X_k-x( t_k ) \|\longrightarrow 0 \text{ as } \Delta t \longrightarrow 0 \text{ and } x_{ 0} \longrightarrow x(t_0)\,. \end{equation*} It is of order \(p\) if \begin{equation*} \sup_{0 \leq t_k \leq T} \|X_{k}-x( t_k ) \|= O\left( (\Delta t)^p\right)\,, \end{equation*} as \(\Delta t \longrightarrow 0 \text{ and } x_{0} \longrightarrow x(t_0)\).

Let us consider a refinement, due to Lubuma and Anguelov, of Mickens's definition of the non-standard finite difference scheme:

Definition 6. (NSFD scheme according to Lubuma and Anguelov [15, 16]) A general one-step numerical scheme that approximates the solution of \eqref{equa41} is said to be a non-standard finite difference scheme if at least one of the following conditions is satisfied:

  1. \(\displaystyle {\frac{d}{dt}X(t_k)\approx \frac{X_{k+1}-X_k}{\phi{(\Delta t) }}} \) with \(\phi({\Delta t }) = \Delta t + O(\Delta t ^2)\) a positive function.
  2. The nonlocal approximation of \(f(t_k,X(t_k))\) is \(\phi_{\Delta t }(f,X_k)=\overline{\phi}_{\Delta t }(f,X_k,X_{k+1})\).

Definition 7. (Euler-Maruyama scheme [5, 17]) Let \(\{X_{t}\}\) be the diffusion solution of the SDE \eqref{intelligent}. Let us consider the interval \([0,T]\) and a regular subdivision: \[t_{0}=0< t_{1}< t_{2}< t_{3}< \cdots< t_{k}=T \quad \text{with step}\quad \Delta t=\frac{T}{k}.\] The Euler-Maruyama scheme of \(\eqref{intelligent}\) is defined as:

\begin{equation}\tag{4} \left\lbrace\begin{array}{ll} X^{EM}_{k+1}=X_{k}+b(t_{k},X_{k})(t_{k+1}-t_{k})+\sigma(t_{k},X_{k})(B_{k+1}-B_{k}), \\ {X(0)}=X_{0}. \end{array}\right.\label{snum2} \end{equation}
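In code, one Euler-Maruyama step only needs the drift \(b\), the diffusion \(\sigma\), and a Gaussian increment. The sketch below (function and parameter names are ours, not from the paper; the Vasicek-type coefficients are illustrative) iterates scheme (4):

```python
import random

def euler_maruyama(b, sigma, x0, T, N, seed=0):
    """Simulate dX = b(t,X)dt + sigma(t,X)dB on [0,T] with N steps, as in scheme (4)."""
    rng = random.Random(seed)
    dt = T / N
    xs = [x0]
    for k in range(N):
        t = k * dt
        dB = rng.gauss(0.0, 1.0) * dt ** 0.5   # Brownian increment ~ N(0, dt)
        xs.append(xs[-1] + b(t, xs[-1]) * dt + sigma(t, xs[-1]) * dB)
    return xs

# Vasicek-type drift/diffusion: b = theta1 - theta2*x, sigma = theta3
path = euler_maruyama(lambda t, x: 1.0 - 2.0 * x, lambda t, x: 0.5, x0=0.0, T=1.0, N=100)
# With sigma = 0 the scheme reduces to the deterministic Euler method
det = euler_maruyama(lambda t, x: -x, lambda t, x: 0.0, x0=1.0, T=1.0, N=100)
```

With the diffusion switched off, the iteration reproduces the classical Euler recursion \(x_{k+1}=(1-\Delta t)x_k\), which is a convenient correctness check.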
Let us now state the definition of the non-standard Euler-Maruyama scheme based on the definitions of the non-standard finite difference scheme and the Euler-Maruyama scheme discussed earlier.

Definition 8.(Non-standard Euler-Maruyama scheme [14,18]) Considering the non-standard scheme definition rules, we define the Non-standard Euler-Maruyama scheme (EMNS) applied to \(\eqref{intelligent}\) and \eqref{snum2} which is given by:

\begin{equation}\tag{5} X^{EMNS}_{k+1}=X_k + b(X_k)\phi(\Delta t) + \sigma (X_k) \Delta B_k\,, \label{equa417} \end{equation}
with \(\phi(\Delta t) = \Delta t + O(\Delta t^2)\), a positive function of \( \Delta t \), and \(\Delta B_k = B_{k+1}-B_k .\)

In the following subsection, we present some elements of the construction of this scheme, including convergence.
2.2.1. Non-standard Euler-Maruyama scheme convergence
Consider the following three assumptions of convergence for non-standard Euler-Maruyama scheme:

Theorem 2.(Scheme convergence assumptions [19])

  1. [\(H_1\)] For the initial condition, we assume that it is chosen independently of the Brownian motion \(B_t\) of the Eq. \eqref{equa417}.
  2. [\(H_2\)] The local Lipschitz condition, i.e., \(\forall \;R> 0 ~\exists L_R\) depending on \(R\) such that \begin{equation*} \left| b(x)-b(y)\right| \vee \left| \sigma(x)-\sigma(y)\right| \leq L_R \left| x-y\right| \quad x,y \in \mathbb{R}^n \end{equation*} with \(\left| x\right|\vee \left| y\right| \leq R\).
  3. [\(H_3\)] Linear growth condition \(~\exists \;k> 0\) such that \begin{equation*} \left| b(x)\right| \vee \left| \sigma(x)\right| \leq k(1+\left| x\right| )\quad \forall x \in \mathbb{R}^n\,. \end{equation*}

Let us consider another useful result concerning the convergence of the non-standard Euler-Maruyama scheme.

Theorem 3. (Strong convergence of the NSEMS [18]) Under assumptions \(H_1\), \(H_2\) and \(H_3\), for any \(p \geq 2\) there exists a constant \(c_1\) depending only on \(\Delta t\) and \(p\) such that the exact solution and the approximation given by the non-standard Euler-Maruyama scheme of \eqref{equa417} satisfy: \begin{equation*} E\left[ \sup_{0\leq t \leq T} \left| Y_t\right|^p\right] \vee E\left[ \sup_{0\leq t \leq T} \left| {X}^{EMNS}(t)\right|^p\right] \leq c_1(\Delta t, p)\,, \end{equation*} and the solution obtained with the non-standard Euler-Maruyama scheme of \eqref{equa417} is strongly convergent.

The remaining paper is organized as follows. In Section 3, the asymptotic stability in mean and in mean-square of the non-standard Euler-Maruyama scheme for the Vasicek and geometric Brownian motion models is carried out, and a classical proof is given for each case. Finally, in Section 4, numerical investigations and residual calculations are provided and discussed.

3. Non-standard Euler-Maruyama stabilities

In this section we consider two models of SDEs: the first SDE model, of Vasicek, involves white noise of an additive nature, while the second model, the geometric Brownian motion, is of multiplicative type. Let us consider the first one, the Vasicek model.

3.1. Vasicek model

Let's consider the following SDE representing the Vasicek model [1]:
\begin{equation}\tag{6} \begin{cases} dX_t = (\theta_1 - \theta_2X_t)dt + \theta_3dB_t, \\ X(0) = X_0 ;\quad \text{with} \quad \theta_1, \theta_2 \in \mathbb{R}^* \quad \text{and} \quad \theta_3 \in \mathbb{R}^*_+. \end{cases} \label{eq(1)} \end{equation}
The analytical solution of model \eqref{eq(1)} is:
\begin{equation}\tag{7} X_t = \frac{\theta_1}{\theta_2} + \left( X_0 - \frac{\theta_1}{\theta_2}\right) e^{-\theta_2t} + \theta_3 \int_ {0}^{t} e^{-\theta_2(t-u)}dB_u \,. \label{equa(2)} \end{equation}
Considering the solution \eqref{equa(2)}, the asymptotic mean and variance are respectively: \begin{equation*} \lim_{t\to\infty} E\left[ X_t\right] = \frac{\theta_1}{\theta_2}\,, \end{equation*} and \begin{equation*} \lim_{t\to\infty} \operatorname{Var}\left( X_t\right) = \frac{\theta^2_3}{2\theta_2}\,. \end{equation*} This means that, asymptotically, the stochastic process satisfies \[X_t \simeq \mathcal{N}\left( \frac{\theta_1}{\theta_2},\frac{\theta^2_3}{2\theta_2}\right) \,.\] By using some properties of Brownian motion, the solution \eqref{equa(2)} can be written as follows:
\begin{equation}\tag{8} X_t = \frac{\theta_1}{\theta_2} + \frac{\theta_3 e^{-\theta_2 t}}{\sqrt{2\theta_2}} B(e^{2\theta_2 t})\,. \label{equa(5)} \end{equation}
Now, we present some numerical stability conditions of the non-standard scheme for the system \eqref{eq(1)}; the proofs are based on the approach described in [4] and used in [1]. The Euler-Maruyama scheme associated with \eqref{eq(1)} is: \begin{equation*} \begin{aligned} X^{EM}_{k +1} & = X_k + (\theta_1 - \theta_2 X_k)\Delta t + \theta_3 \Delta B_k. \end{aligned} \end{equation*} After calculation, we get
\begin{equation}\tag{9} X^{EM}_{k +1} =\theta_1 \Delta t + (1 - \Delta t \theta_2)X_k + \theta_3 \sqrt{\Delta t}Z_k. \label{eq(2)} \end{equation}

Remark 2. We assume that the variable \(Z_k\) follows the normal distribution with mean \(0\) and variance \( 1\), i.e., \[Z_k \simeq \mathcal{N}(0,1)\,.\]

The non-standard Euler-Maruyama scheme associated to \eqref{eq(2)} is \begin{equation*} \begin{aligned} X^{EMNS}_{k +1} &= X_k + (\theta_1 - \theta_2X_k)\phi(\Delta t) + \theta_3 \Delta B_k = X_k + \theta_1\phi( \Delta t) - \theta_2\phi(\Delta t)X_k + \theta_3 \Delta B_k\,. \end{aligned} \end{equation*} After calculation, we obtain
\begin{equation}\label{eq(3)}\tag{10} X^{EMNS}_{k +1} = \theta_1\phi( \Delta t) + (1 - \theta_2\phi(\Delta t))X_k + \theta_3 \sqrt{\Delta t} Z_k\,. \end{equation}

Remark 3. The positive function \(\phi(\Delta t)\) is calculated using the procedure described by Mickens in [20]. In the framework of this Vasicek stochastic differential equation, we consider only the drift part for the computation, i.e., \( b(t,X_t) = \theta_1 - \theta_2X_t\,, \) while the diffusion part, i.e., \( \sigma (t,X_t)= \theta_3\,, \) remains unchanged.

Determining the function \( \phi(\Delta t)\), while respecting the rules described in [20], gives:
\begin{equation}\tag{11} \phi(\Delta t) = \frac{1 - e^{-\theta_2\Delta t}}{\theta_2} \,. \label{eq(4)} \end{equation}
Carrying \eqref{eq(4)} into \eqref{eq(3)}, we have \begin{equation*} \begin{aligned} X^{EMNS}_{k + 1} &= X_k + \left( \theta_1 - \theta_2X_k\right) \left( \frac{1 - e^{-\theta_2\Delta t}}{\theta_2}\right) + \theta_3 \Delta B_k\\ & = \frac{\theta_1}{\theta_2}\left( 1 - e^{-\theta_2\Delta t}\right) + \left( 1 - \theta_2\left( \frac{1 - e^{-\theta_2\Delta t}}{\theta_2}\right) \right) X_k + \theta_3 \Delta B_k\\ & = \frac{\theta_1}{\theta_2}(1 - e^{-\theta_2\Delta t}) + e^{-\theta_2\Delta t}X_k + \theta_3 \Delta B_k\,. \end{aligned} \end{equation*} Replacing \(\Delta B_k\) by \(\sqrt{\Delta t}\, Z_k\), the non-standard Euler-Maruyama scheme of the Vasicek model becomes
\begin{equation}\tag{12} X^{EMNS}_{k+1} = \frac{\theta_1}{\theta_2}(1 - e^{-\theta_2\Delta t}) + e^{-\theta_2\Delta t} X_k + \theta_3 \sqrt{\Delta t} Z_k\,. \end{equation}
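Update (12) translates directly into code. The sketch below is a minimal implementation (variable names and parameter values are ours, chosen for illustration); note that the deterministic factor \(e^{-\theta_2\Delta t}\) is computed once and reused at every step:

```python
import math, random

def vasicek_emns(theta1, theta2, theta3, x0, dt, n_steps, seed=0):
    """Non-standard Euler-Maruyama scheme (12) for the Vasicek model (6)."""
    rng = random.Random(seed)
    e = math.exp(-theta2 * dt)   # exact deterministic decay factor per step
    x = x0
    for _ in range(n_steps):
        z = rng.gauss(0.0, 1.0)
        x = (theta1 / theta2) * (1.0 - e) + e * x + theta3 * math.sqrt(dt) * z
    return x

# With theta3 = 0 the scheme reproduces the deterministic mean reversion exactly:
# x(t) = theta1/theta2 + (x0 - theta1/theta2) * exp(-theta2 * t)
x_end = vasicek_emns(theta1=1.0, theta2=2.0, theta3=0.0, x0=5.0, dt=0.01, n_steps=1000)
```

The noise-free run above lands on the exact solution \(0.5 + 4.5\,e^{-2t}\) at \(t = 10\), which is precisely the property that distinguishes the non-standard scheme from the plain Euler-Maruyama discretization.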
Let us analyse the numerical mean and mean-square stabilities of the Non-standard Euler-Maruyama scheme of the Vasicek stochastic differential equation model.
3.1.1. Mean stability of non-standard Euler-Maruyama scheme

Theorem 4. [Mean stability of non-standard Euler-Maruyama scheme] The non-standard Euler-Maruyama scheme of the model \eqref{eq(1)} is asymptotically stable in mean, with \begin{equation*} E\left( X^{EMNS}_{k+1}\right) = \left| e^{-\theta_2 \Delta t}\right| ^{k+1}E\left( X_0\right) + \left| \frac{\theta_1}{\theta_2}\left( 1-e^{-\theta_2 \Delta t}\right)\right| \sum_{i=0}^{k}\left| e^{-\theta_2 \Delta t}\right| ^i \label{eq(5)}\,, \end{equation*} provided \( \left| e^{-\theta_2 \Delta t}\right| < 1 \), and then \( \lim_{\Delta t \to 0} \left( \lim_{k \to \infty}E\left( X^{EMNS}_{k+1}\right) \right) = \frac{\theta_1}{\theta_2}. \)

Proof. Let us begin by evaluating the mean of \eqref{eq(3)}; we have \[\begin{aligned} E\left( X^{EMNS}_{k +1}\right) & = E\left( \frac{\theta_1}{\theta_2}\left( 1 - e^{-\theta_2\Delta t}\right) + e^{-\theta_2\Delta t} X_k + \theta_3 \Delta B_k\right) \\ & = E\left( \frac{\theta_1}{\theta_2}\left( 1 - e^{-\theta_2\Delta t}\right) \right) + E\left( e^{-\theta_2\Delta t} X_k\right) + E\left( \theta_3 \Delta B_k\right) \\ & = \left| \frac{\theta_1}{\theta_2}\left( 1 - e^{-\theta_2\Delta t}\right) \right| + \left| e^{-\theta_2 \Delta t}\right| E\left( X_k\right) + \theta_3 \sqrt{\Delta t}\,E\left( Z_k\right)\\ & = \left| \frac{\theta_1}{\theta_2}\left( 1 - e^{-\theta_2\Delta t}\right) \right| + \left| e^{-\theta_2 \Delta t}\right| E\left( X_k\right) \quad \text{ since } E\left( Z_k\right) = 0\\ & = \left| \frac{\theta_1}{\theta_2}\left( 1 - e^{-\theta_2\Delta t}\right) \right| + \left| e^{-\theta_2 \Delta t}\right| \left\lbrace \left| \frac{\theta_1}{\theta_2}\left( 1 - e^{-\theta_2\Delta t}\right)\right| + \left| e^{-\theta_2 \Delta t}\right| E\left( X_{k-1}\right) \right\rbrace\,.\end{aligned}\] Continuing with the iterations, we get \begin{equation*} E\left( X^{EMNS}_{k+1}\right) = \left| e^{-\theta_2 \Delta t}\right| ^{k+1}E\left( X_0\right) + \left| \frac{\theta_1}{\theta_2}\left( 1-e^{-\theta_2 \Delta t}\right)\right| \sum_{i=0}^{k}\left| e^{-\theta_2 \Delta t}\right| ^i\,. \end{equation*} Let us set \[ I_1 = \left| e^{-\theta_2 \Delta t}\right| ^{k+1} E(X_0)\,; \] since \(\left| e^{-\theta_2 \Delta t}\right| < 1\) for \(\theta_2 > 0\), passing to the limit gives

\begin{equation}\tag{13} \label{i} \lim_{\Delta t \to 0} \left( \lim_{k \to \infty} \left| e^{-\theta_2 \Delta t}\right| ^{k + 1} E\left( X_0\right) \right) = 0\,,\end{equation}
and similarly for \(I_2 = \left| \frac{\theta_1}{\theta_2}\left(1 - e^{-\theta_2\Delta t}\right)\right| \displaystyle\sum_{i = 0}^{k}|e^{-\theta_2 \Delta t}|^i \): the geometric sum converges to \(\left(1-|e^{-\theta_2 \Delta t}|\right)^{-1}\) for \(\left| e^{-\theta_2 \Delta t}\right| < 1\) and \(\theta_2 > 0\), so that
\begin{equation}\tag{14}\label{ii} \lim_{\Delta t \to 0}\left( \lim_{k \to \infty} \left| \frac{\theta_1}{\theta_2}\left(1 - e^{-\theta_2\Delta t}\right)\right| \displaystyle\sum_{i = 0}^{k}|e^{-\theta_2 \Delta t}|^i \right) = \frac{\theta_1}{\theta_2}.\end{equation}
From (\ref{i}) and (\ref{ii}), we have \( \forall\;\theta_2 > 0 , |e^{-\theta_2 \Delta t}| < 1 \), \[ \lim_{\Delta t \to 0}\left( \lim_{k \to \infty}E\left( X_{k+1}\right) \right) = \frac{\theta_1}{\theta_2}. \]
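The recursion behind this proof is easy to check numerically. The sketch below (parameter values are illustrative) iterates the mean update of scheme (12), \(E(X_{k+1}) = \frac{\theta_1}{\theta_2}(1-e^{-\theta_2\Delta t}) + e^{-\theta_2\Delta t}E(X_k)\), and drives the mean to \(\theta_1/\theta_2\) regardless of the starting mean:

```python
import math

theta1, theta2, dt = 3.0, 1.5, 0.05
a = (theta1 / theta2) * (1.0 - math.exp(-theta2 * dt))
r = math.exp(-theta2 * dt)   # |r| < 1 whenever theta2 > 0

mean = 10.0                  # arbitrary starting mean E(X_0)
for _ in range(2000):
    mean = a + r * mean      # mean update of the EMNS scheme (12)

# the iteration converges to the fixed point a / (1 - r) = theta1 / theta2
```

Since \(a/(1-r)\) simplifies to \(\theta_1/\theta_2\) exactly, the limit here does not even require \(\Delta t \to 0\); the mean fixed point is correct for every step size.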

3.1.2. Mean-square stability of non-standard Euler-Maruyama scheme

Theorem 5. [Mean-square stability of non-standard Euler-Maruyama scheme] The non-standard Euler-Maruyama scheme of the model \eqref{eq(1)} is asymptotically stable in mean-square, with \begin{equation*} E\left( \left| X^{EMNS}_{k+1}\right|^2\right) = \left| e^{-\theta_2 \Delta t}\right|^{2(k+1)}E\left( \left| X_0\right| ^2\right) + \left( \left|\frac{\theta_1}{\theta_2}\left( 1 - e^{-\theta_2\Delta t}\right) \right| ^2 + \theta_3^2\Delta t\right) \sum_{i=0}^{k}\left|e^{-\theta_2 \Delta t}\right| ^{2i}\,, \end{equation*} provided \( \left| e^{-\theta_2 \Delta t}\right| < 1\,, \; \theta_2>0\,; \) then \( \lim_{\Delta t \to 0} \left( \lim_{k \to \infty}E\left( \left| X^{EMNS}_{k+1}\right|^2\right) \right) = \frac{{\theta_3}^2}{2\theta_2}\,. \)

Proof. First, let us calculate the mean-square of the EMNS scheme of the model \eqref{eq(1)}. Using \(E(Z_k)=0\) and \(E(Z_k^2)=1\) (the deterministic cross term is neglected, as it vanishes in the limit \(\Delta t \to 0\)), we have \[\begin{aligned} E\left( \left| X^{EMNS}_{k+1}\right| ^2\right) &= E\left( \left|\frac{\theta_1}{\theta_2}\left( 1 - e^{-\theta_2\Delta t}\right) + e^{-\theta_2\Delta t} X_k + \theta_3\sqrt{\Delta t} Z_k\right| ^2\right) \\ & = \left| \frac{\theta_1}{\theta_2}\left( 1 - e^{-\theta_2\Delta t}\right) \right| ^2 + \theta_3^2\Delta t + \left| e^{-\theta_2\Delta t}\right| ^2E\left( \left|X_k\right| ^2\right) \\ & = \left| \frac{\theta_1}{\theta_2}\left( 1 - e^{-\theta_2\Delta t}\right) \right| ^2 + \theta_3^2\Delta t + \left| e^{-\theta_2\Delta t}\right| ^2 \left\lbrace \left| \frac{\theta_1}{\theta_2}\left( 1-e^{-\theta_2 \Delta t}\right) \right| ^2 + \theta_3^2\Delta t + \left| e^{-\theta_2 \Delta t}\right| ^2E\left( \left|X_{k-1}\right| ^2\right) \right\rbrace\\ & ~~\vdots\\ & = \left| e^{-\theta_2 \Delta t}\right|^{2(k+1)}E\left( \left| X_0\right| ^2\right) + \left( \left|\frac{\theta_1}{\theta_2}\left( 1 - e^{-\theta_2\Delta t}\right) \right| ^2 + \theta_3^2\Delta t\right) \sum_{i=0}^{k}\left|e^{-\theta_2 \Delta t}\right| ^{2i}\,. \end{aligned}\] Passing to the limit for \( k \to +\infty \) with \( \left|e^{-\theta_2 \Delta t}\right| < 1 \) and \( \theta_2>0 \), the geometric sum converges to \(\left(1-e^{-2\theta_2\Delta t}\right)^{-1}\); since \(\theta_3^2\Delta t/\left(1-e^{-2\theta_2\Delta t}\right)\to \theta_3^2/(2\theta_2)\) while the squared drift term vanishes as \(\Delta t \to 0\), we obtain \begin{equation*} \lim_{\Delta t \to 0} \left( \lim_{k \to \infty}E\left( \left| X^{EMNS}_{k+1}\right|^2\right) \right) = \frac{{\theta_3}^2}{2\theta_2}\,. \end{equation*}
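The second-moment recursion can likewise be verified numerically. In the sketch below (parameter values are illustrative; \(\theta_1 = 0\) isolates the variance part), the iteration \(m_{k+1} = a^2 + \theta_3^2\Delta t + e^{-2\theta_2\Delta t}\, m_k\) settles near the stationary variance \(\theta_3^2/(2\theta_2)\) for a small step:

```python
import math

theta1, theta2, theta3 = 0.0, 2.0, 1.0   # theta1 = 0 isolates the variance part
dt = 1e-4
a = (theta1 / theta2) * (1.0 - math.exp(-theta2 * dt))
r2 = math.exp(-2.0 * theta2 * dt)        # squared decay factor per step

m = 4.0                                   # arbitrary starting value of E(|X_0|^2)
for _ in range(200_000):
    m = a * a + theta3 ** 2 * dt + r2 * m

# m approaches theta3**2 / (2 * theta2) = 0.25 as dt -> 0
```

The fixed point is \((a^2 + \theta_3^2\Delta t)/(1-e^{-2\theta_2\Delta t})\), which differs from \(0.25\) only by an \(O(\Delta t)\) correction, in agreement with the double limit in Theorem 5.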

Theorem 6. Under the above assumptions, the non-standard Euler-Maruyama scheme is asymptotically stable in mean and in mean-square if \( \left|e^{-\theta_2 \Delta t}\right| < 1\), which holds whenever \(\theta_2 >0.\)

Let's consider the second model, the Geometric Brownian motion.

3.2. Geometric Brownian motion

Consider the following stochastic differential equation [1], describing geometric Brownian motion
\begin{equation}\tag{15} \left\lbrace \begin{array}{ll} dX_t = \theta_1 X_t dt + \theta_2 X_t dB_t,\\ X(0) = X_0, \end{array} \right. \quad \theta_1, \theta_2 \in \mathbb{R}. \label{eq(7)} \end{equation}
The explicit solution of this model is
\begin{equation}\tag{16} X_t = X_0 e^{ \left\lbrace (\theta_1-\frac{1}{2}\theta_2^2)t + \theta_2B_t\right\rbrace }\label{6b} \,.\end{equation}
The right-hand side involves a normally distributed random variable, so the solution can also be written in the form: \begin{equation*} X_t = X_s e^{\left\lbrace \left( \theta_1-\frac{1}{2}\theta_2^2\right) t + \theta_2 \left( B_t -B_s\right) \right\rbrace }\label{6c} \,.\end{equation*} The mean and the mean-square of \eqref{6b} are respectively:
\begin{equation}\tag{17} E(X_t) = X_0e^{\theta_1 t},\quad E(X_t^2) = X^2_0 e^{\left( 2\theta_1+\theta_2^2\right)t}\,. \end{equation}

Remark 4. ([1]) It should be noted that

  1. For the mean, if \(t\to \infty\) and \(\theta_1 < 0\), we have \begin{equation*} \lim_{t \to \infty}E(X_t) = \lim_{t \to \infty} X_0 e^{\theta_1 t} = 0 \,.\end{equation*}
  2. For the mean-square, if \(\left( 2\theta_1+\theta_2^2\right)< 0\) and \(t \to \infty\), we have \begin{equation*} \lim_{t \to \infty}E(X_t^2) = 0\,. \end{equation*}

The Euler-Maruyama scheme of the geometric Brownian motion model [1] associated with \eqref{eq(7)} is
\begin{equation}\tag{18} X^{EM}_{k+1} = X_k(1+\theta_1\Delta t) + \theta_2 X_k \Delta B_k\,, \label{eq(8)} \end{equation}
with \(\Delta B_k = \sqrt{\Delta t} Z_k\) and \(Z_k \simeq \mathcal{N}(0,1).\) The non-standard Euler-Maruyama scheme associated with \eqref{eq(8)} is \begin{equation*} X^{EMNS}_{k+1} = X_k + \theta_1 \phi (\Delta t) X_k + \theta_2 X_k \Delta B_k \,.\end{equation*} After some calculations, we get
\begin{equation}\tag{19} X^{EMNS}_{k+1} = X_k \left( 1+\theta_1 \phi(\Delta t)\right) + \theta_2 X_k \sqrt{\Delta t} Z_k \label{eq(9)}\,. \end{equation}

Remark 5. Recall that \(\phi(\Delta t)\) is a positive function. In the framework of the geometric Brownian motion stochastic differential equation treated here, to calculate \( \phi(\Delta t)\) we consider only the drift part \( b(t,X_t) = \theta_1X_t\), while the diffusion part \( \sigma (t,X_t)= \theta_2X_t\) remains unchanged.

Calculating \(\phi(\Delta t)\) without diffusion according to [20], we have
\begin{equation}\tag{20} \phi(\Delta t) = \frac{e^{\theta_1\Delta t}-1}{\theta_1}\,. \label{eq(10)} \end{equation}
Carrying \eqref{eq(10)} in \eqref{eq(9)}, the Non-standard Euler-Maruyama scheme gives, \begin{align*} X^{EMNS}_{k+1} =& X_k \left( 1 +\theta_1\left( \frac{e^{\theta_1\Delta t}-1}{\theta_1}\right) \right) +\theta_2X_k \sqrt{\Delta t}Z_k. \end{align*} This gives us,
\begin{equation}\tag{21} X^{EMNS}_{k+1} = e^{\theta_1\Delta t}X_k+\theta_2X_k\sqrt{\Delta t}Z_k\,.\label{eq(11)} \end{equation}
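Update (21) is a one-line recursion in code. The sketch below (function and parameter names are ours; values are illustrative) implements it, precomputing the exact deterministic growth factor \(e^{\theta_1\Delta t}\):

```python
import math, random

def gbm_emns(theta1, theta2, x0, dt, n_steps, seed=0):
    """Non-standard Euler-Maruyama scheme (21) for geometric Brownian motion (15)."""
    rng = random.Random(seed)
    growth = math.exp(theta1 * dt)   # exact deterministic growth factor per step
    x = x0
    for _ in range(n_steps):
        x = growth * x + theta2 * x * math.sqrt(dt) * rng.gauss(0.0, 1.0)
    return x

# With theta2 = 0 the scheme is exact: X_k = X_0 * exp(theta1 * k * dt)
x_end = gbm_emns(theta1=-1.0, theta2=0.0, x0=2.0, dt=0.01, n_steps=500)
```

As with the Vasicek model, the noise-free part of the scheme reproduces the exact exponential, here \(2e^{-t}\) at \(t=5\), for any step size.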
Let us evaluate the stabilities of the expression \eqref{eq(11)}. In the following paragraph, we state and prove the mean and mean-square stability conditions of the Non-standard Euler-Maruyama scheme. Let us start with the mean stability of the scheme.
3.2.1. Mean stability of the non-standard Euler-Maruyama scheme

Theorem 7. (Mean stability of the non-standard Euler-Maruyama scheme) The non-standard Euler-Maruyama scheme of the model \eqref{eq(7)} is asymptotically stable in mean, with \begin{equation*} E\left( X^{EMNS}_{k+1}\right) = \left( e^{\theta_1\Delta t}\right)^{k+1} E\left( X_0\right)\,, \label{eq(12)} \end{equation*} provided \( \left| e^{\theta_1 \Delta t}\right| < 1 \), i.e., \(\theta_1< 0\), and then \begin{equation*} \lim_{\Delta t \to 0} \left( \lim_{k \to \infty}E\left( X^{EMNS}_{k+1}\right) \right) = 0\,. \end{equation*}

Proof. Let us calculate the mean of the non-standard Euler-Maruyama scheme. \[\begin{aligned} E\left( X^{EMNS}_{k+1}\right) &= E\left[ \left( e^{\theta_1\Delta t}+\theta_2 \sqrt{\Delta t}Z_k\right) X_k\right] \\ &= E\left( e^{\theta_1\Delta t}X_k\right) + \theta_2 \sqrt{\Delta t}\,E\left( X_k\right)E\left( Z_k\right) \quad \text{with} \quad E\left( Z_k\right)=0\\ &= e^{\theta_1\Delta t} E\left( X_k\right)\\ &= e^{\theta_1\Delta t}\left( e^{\theta_1\Delta t}\left( e^{\theta_1\Delta t}\left( e^{\theta_1\Delta t}\left( \cdots e^{\theta_1\Delta t}E\left( X_0\right)\right) \right) \right) \right)\,. \end{aligned}\] Continuing with the iterations, we get $$\tag{22} E\left( X^{EMNS}_{k+1}\right) = \left( e^{\theta_{1}\Delta t}\right)^{k+1} E \left( X_{0}\right). $$ If we assume that \[ \left| e^{\theta_1 \Delta t}\right| < 1 \quad \text{with} \quad \theta_1 < 0 , \] passing to the limit for \( \Delta t \to 0 \) and \( k \to +\infty \) we get the sought result, i.e., \begin{equation*} \lim_{\Delta t \to 0} \left( \lim_{k \rightarrow \infty}E\left( X^{EMNS}_{k+1}\right) \right) = 0\,. \end{equation*}

3.2.2. Mean-square stability of the non-standard Euler-Maruyama scheme

Theorem 8. (Mean-square stability of the EMNS scheme) The non-standard Euler-Maruyama scheme of the model \eqref{eq(7)} is asymptotically stable in mean-square, with \begin{equation*} E\left( \left| X^{EMNS}_{k+1}\right|^2 \right) = \left( e^{2\theta_1\Delta t} + \theta_2^2 \Delta t\right)^{k+1} E\left( \left| X_{0}\right| ^2\right)\,, \end{equation*} provided \begin{equation*} e^{2\theta_1\Delta t}+ \theta_2^2 \Delta t < 1\,, \end{equation*} and then \begin{equation*} \lim_{\Delta t \to 0} \left( \lim_{k \to \infty}E\left( \left| X^{EMNS}_{k+1}\right|^2\right) \right) =0\,. \end{equation*}

Proof. Let us evaluate the mean-square of the non-standard Euler-Maruyama scheme \eqref{eq(11)} of Eq. \eqref{eq(7)}. Since \(Z_k\) is independent of \(X_k\), with \(E(Z_k)=0\) and \(E(Z_k^2)=1\), \[\begin{aligned} E\left( \left| X^{EMNS}_{k+1}\right|^2 \right) &= E\left( \left| \left( e^{\theta_1\Delta t} + \theta_2 \sqrt{\Delta t} Z_k\right) X_k\right| ^2\right) = E\left(\left| e^{\theta_1\Delta t} + \theta_2 \sqrt{\Delta t} Z_k\right|^2 \right) E\left(\left|X_k\right|^2\right)\\ &=\left( e^{2\theta_1\Delta t} + \theta_2^2 \Delta t\right) E\left(\left| X_k\right|^2\right) \\ &=\left( e^{2\theta_1\Delta t} + \theta_2^2 \Delta t\right) \left( \left( e^{2\theta_1\Delta t} + \theta_2^2 \Delta t\right) E\left(\left| X_{k-1}\right|^2\right)\right)\\ &~~\vdots\\ &= \left( e^{2\theta_1\Delta t} + \theta_2^2 \Delta t\right)^{k+1} E\left( \left| X_{0}\right| ^2\right)\,. \end{aligned}\] Continuing with the iterations, we get

\begin{equation}\tag{23} E\left( \left| X^{EMNS}_{k+1}\right|^2 \right) = \left( e^{2\theta_1\Delta t} + \theta_2^2 \Delta t\right)^{k+1} E\left( \left| X_{0}\right| ^2\right)\,. \label{eq(13)} \end{equation}
If we assume that \[ e^{2\theta_1\Delta t}+ \theta_2^2 \Delta t < 1\,, \] the sequence is geometric, so passing to the limit for \( \Delta t \to 0 \) and \( k \to +\infty \) we find that \begin{equation*} \lim_{\Delta t \to 0} \left( \lim_{k \to \infty}E\left( \left| X^{EMNS}_{k+1}\right|^2\right) \right) =0. \end{equation*} Note that for small \(\Delta t\), \(e^{2\theta_1\Delta t}+ \theta_2^2 \Delta t \approx 1 + (2\theta_1+\theta_2^2)\Delta t\), so the condition corresponds to \(2\theta_1+\theta_2^2 < 0\), in agreement with Remark 4.
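Since \(E(Z_k)=0\) and \(E(Z_k^2)=1\), the per-step amplification factor of the second moment under scheme (21) is \(e^{2\theta_1\Delta t}+\theta_2^2\Delta t\). A quick check (a minimal sketch; parameter values are illustrative) confirms it sits on the correct side of \(1\) according to the sign of \(2\theta_1+\theta_2^2\):

```python
import math

def ms_factor(theta1, theta2, dt):
    """Per-step amplification of E|X_k|^2 under the EMNS scheme (21)."""
    return math.exp(2.0 * theta1 * dt) + theta2 ** 2 * dt

dt = 0.01
stable   = ms_factor(-1.0, 1.0, dt)   # 2*theta1 + theta2**2 = -1 < 0
unstable = ms_factor( 1.0, 1.0, dt)   # 2*theta1 + theta2**2 =  3 > 0
print(stable, unstable)
```

For small \(\Delta t\) the factor behaves like \(1 + (2\theta_1+\theta_2^2)\Delta t\), so the discrete stability condition mirrors the continuous-time one from Remark 4.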

Theorem 9. Under the above assumptions, the non-standard Euler-Maruyama scheme is asymptotically stable in mean if \[\left| e^{\theta_1 \Delta t}\right| < 1\,, \quad \text{i.e.,} \quad \theta_1 <0\,,\] and in mean-square if \[ e^{2\theta_1\Delta t}+ \theta_2^2 \Delta t < 1.\]

4. Numerical simulations of the models

This section presents some numerical simulations of Vasicek and geometric Brownian motion models. We treat the stability and instability cases and the increasing and decreasing cases. We will associate with each model the calculation of the residuals. Let's start with the simulation of the first model, the Vasicek's one.

4.1. Vasicek model simulation

In this subsection, we consider the Vasicek equation, we present the different simulations of this model.

Figure 1. Stability of the Vasicek model using the EMNS scheme.

Figure 2. Stability and instability of the Vasicek model using the EMNS scheme.

4.2. Geometric Brownian motion model simulation

In this subsection, we present the different numerical simulations of geometric Brownian motion.

Figure 3. Increasing case of geometric Brownian motion.

Figure 4. Decreasing case of geometric Brownian motion.

4.3. Interpretations of the results

4.3.1. Vasicek's Model
The two figures in Figure 1 show that the non-standard Euler-Maruyama scheme is stable for the Vasicek model. Moreover, the errors are smaller, namely \(\text{Emnserr} = 0.0338\) (resp. \(\text{Emnserr} = 0.0354\)), compared to the errors obtained by using the Euler-Maruyama scheme, \(\text{Emerr} = 0.2280\) (resp. \(\text{Emerr} = 0.2286\)), for the same model with \(\Delta t=0.004\) (resp. \(\Delta t=0.015\)). On the other hand, the two figures in Figure 2 show stability (resp. instability) of the non-standard Euler-Maruyama scheme for the Vasicek model: the errors are smaller, namely \(\text{Emnserr} = 0.0377\) (resp. \(\text{Emnserr} = 3.4794\)), compared to those obtained with the Euler-Maruyama scheme, \(\text{Emerr} = 0.2298\) (resp. \(\text{Emerr} = 7.7772\)), for the same model with \(\Delta t=0.015\). Although the same discretization step was considered, the instability arises because the parameter value \( \theta_2 =-4\), i.e., a negative value, is not an admissible value for this parameter.
4.3.2. Geometric Brownian motion model
Figure 3 and Figure 4 present the stability of the geometric Brownian motion model: the two figures in Figure 3 are produced with a positive \( \theta_1 \), which is why they show growth, while the two figures in Figure 4 use a negative \( \theta_1 \), which is why they show decay. In both figures of Figure 3 the errors are smaller, i.e., \(\text{Emnserr} = 0.2548\) (resp. \(\text{Emnserr} = 0.0283\)), compared to the Euler-Maruyama scheme, \(\text{Emerr} = 0.2611\) (resp. \(\text{Emerr} = 0.0303\)), for \(\Delta t=0.015\) (resp. \(\Delta t=0.004\)) with \( \theta_1=1\). On the other hand, in the two figures of Figure 4 the errors of the non-standard Euler-Maruyama scheme are smaller, \(\text{Emnserr} = 0.0415\) (resp. \(\text{Emnserr} = 0.0046\)), compared to the errors obtained with the Euler-Maruyama scheme, \(\text{Emerr} = 0.0423\) (resp. \( \text{Emerr} = 0.0054 \)), for \( \Delta t=0.015\) (resp. \( \Delta t= 0.008\)) with \( \theta_1=-1 \) (resp. \( \theta_1=-2 \)).

Remark 6. In line with the residual calculations for the two stochastic differential equation models treated, we find that the non-standard Euler-Maruyama scheme outperforms the Euler-Maruyama scheme, as it approximates the exact solutions more closely.
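One reason the EMNS residuals come out smaller is already visible in the noise-free part of the Vasicek model: the EMNS drift update reproduces the exact exponential decay, while the EM drift update only approximates it to first order. A minimal sketch (illustrative parameters; the diffusion is set to zero to isolate the drift error):

```python
import math

theta1, theta2, x0, dt, n = 1.0, 2.0, 5.0, 0.05, 40
mean_rev = theta1 / theta2
e = math.exp(-theta2 * dt)

x_em, x_emns = x0, x0
for k in range(n):
    x_em   = x_em + (theta1 - theta2 * x_em) * dt    # EM drift step
    x_emns = mean_rev * (1.0 - e) + e * x_emns       # EMNS drift step (12)

# exact deterministic solution at t = n*dt
exact = mean_rev + (x0 - mean_rev) * math.exp(-theta2 * n * dt)
err_em, err_emns = abs(x_em - exact), abs(x_emns - exact)
print(err_em, err_emns)
```

Here the EMNS error is at the level of floating-point round-off, whereas the EM error is of order \(\Delta t\); with noise switched back on, this exact treatment of the drift is what drives the smaller residuals reported above.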

5. Conclusion

This paper presents the mean and mean-square stability of the non-standard Euler-Maruyama scheme for the Vasicek and geometric Brownian motion models. Both models are linear and one-dimensional; the Vasicek model has additive noise while the geometric Brownian motion has multiplicative noise. For these models we have established the numerical stability conditions of the non-standard Euler-Maruyama scheme, and a comparison has been made with the Euler-Maruyama scheme. These conditions are stated as theorems, and we prove them using the classical approach. The results are supported by numerical simulations and the computation of residuals. It should be noted that the non-standard Euler-Maruyama scheme improves on the Euler-Maruyama scheme. In future work, we will focus on non-standard schemes for non-linear and higher-dimensional SDEs.

Acknowledgments :

The authors would like to thank the referee for his/her valuable comments that resulted in the present improved version of the article.

Conflicts of Interest:

''The authors declare no conflict of interest.''

Data Availability:

All data required for this research is included within this paper.

Funding Information:

No funding is available for this research.

References

  1. Badibi, O. C., Ramadhani, I., Ndondo, M. A., & Kumwimba, S. D. (2023). Numerical stabilities of Vasicek and Geometric Brownian motion models. European Journal of Mathematical Analysis, 3, Article No. 8. [Google Scholor]
  2. Richard, J. P. (2002). Mathématiques pour les systèmes dynamiques. Hermès Science Publications, Paris.
  3. Ahmad, R. (1988). Introduction to Stochastic Differential Equations. M. Dekker. [Google Scholor]
  4. Saito, Y. (2008). Stability analysis of numerical methods for stochastic systems with additive noise.Review of Economics and Information Studies, 8(3-4), 119-123. [Google Scholor]
  5. Øksendal, B. (2003). Stochastic differential equations. In Stochastic Differential Equations (pp. 65-84). Springer, Berlin, Heidelberg. [Google Scholor]
  6. Øksendal, B., & Sulem, A. (2005). Stochastic Control of Jump Diffusions (pp. 39-58). Springer Berlin Heidelberg. [Google Scholor]
  7. Lyapunov, A. M. (1892). A general task about the stability of motion.Ph. D. Thesis, University of Kazan, Tatarstan (Russia).[Google Scholor]
  8. Mickens, R. E., & Washington, T. M. (2012). A note on an NSFD scheme for a mathematical model of respiratory virus transmission.Journal of Difference Equations and Applications, 18(3), 525-529.[Google Scholor]
  9. Mickens, R. E. (2005).Advances in the Applications of Nonstandard Finite Difference Schemes. World Scientific.[Google Scholor]
  10. Mickens, R. E. (1994).Nonstandard Finite Difference Models of Differential Equations. world scientific.[Google Scholor]
  11. Mickens, R. E. (2005). Discrete models of differential equations: the roles of dynamic consistency and positivity. InDifference Equations and Discrete Dynamical Systems (pp. 51-70).[Google Scholor]
  12. Mickens, R. E. (2007). Numerical integration of population models satisfying conservation laws: NSFD methods.Journal of Biological Dynamics, 1(4), 427-436.[Google Scholor]
  13. Mickens, R. E. (2000).Applications of nonstandard finite difference schemes. World Scientific.[Google Scholor]
  14. Pierret, F. (2015). Modélisation de systèmes dynamiques déterministes, stochastiques ou discrets: application à l'astronomie et à la physique (Doctoral dissertation, Observatoire de Paris). [Google Scholor]
  15. Anguelov, R., & Lubuma, J. M. S. (2000).On the nonstandard finite difference method.Notices of the South African Mathematical Society, 31, 143-152.[Google Scholor]
  16. Anguelov, R., & Lubuma, J. M. S. (2001). Contributions to the mathematics of the nonstandard finite difference method and applications.Numerical Methods for Partial Differential Equations: An International Journal, 17(5), 518-543.[Google Scholor]
  17. Kloeden, P. E., & Platen, E. (1992). Stochastic differential equations. InNumerical Solution of Stochastic Differential Equations (pp. 103-160). Springer, Berlin, Heidelberg.[Google Scholor]
  18. Platen, E., & Bruti-Liberati, N. (2010). Numerical Solution of Stochastic Differential Equations with Jumps in Finance (Vol. 64). Springer Science & Business Media.[Google Scholor]
  19. Cresson, J., & Pierret, F. (2016). Non standard finite difference scheme preserving dynamical properties.Journal of Computational and Applied Mathematics, 303, 15-30.[Google Scholor]
  20. Mickens, R. E. (2007). Calculation of denominator functions for nonstandard finite difference schemes for differential equations satisfying a positivity condition. Numerical Methods for Partial Differential Equations: An International Journal, 23(3), 672-691. [Google Scholor]
On a class of \(p\)-valent functions with negative coefficients defined by opoola differential operator https://old.pisrt.org/psr-press/journals/oma-vol-6-issue-2-2022/on-a-class-of-p-valent-functions-with-negative-coefficients-defined-by-opoola-differential-operator/ Fri, 30 Dec 2022 17:59:35 +0000 https://old.pisrt.org/?p=6937
OMA-Vol. 6 (2022), Issue 2, pp. 35 - 50
Bitrus Sambo and Timothy Oloyede Opoola
Abstract: Using the Opoola differential operator, we define a subclass \(S^{n}_{p}(\lambda,\alpha,\gamma,\delta)\) of the class of multivalent (\(p\)-valent) functions. Several properties of the class are studied, such as coefficient inequalities, the Hadamard product, radii of close-to-convexity, starlikeness and convexity, extreme points, and integral mean inequalities for the fractional derivatives; furthermore, growth and distortion theorems are given using fractional calculus techniques.

Open Journal of Mathematical Analysis

On a class of \(p\)-valent functions with negative coefficients defined by Opoola differential operator

Bitrus Sambo\(^{1,*}\) and Timothy Oloyede Opoola\(^2\)
\(^1\) Department of Mathematics, Gombe State University, P.M.B. 127, Gombe, Nigeria.
\(^2\) Department of Mathematics, University of Ilorin, P.M.B. 1515, Ilorin, Nigeria.
Correspondence should be addressed to Bitrus Sambo at bitrussambo3@gmail.com

Abstract

Using the Opoola differential operator, we define a subclass \(S^{n}_{p}(\lambda,\alpha,\gamma,\delta)\) of the class of multivalent (\(p\)-valent) functions. Several properties of the class are studied, such as coefficient inequalities, the Hadamard product, radii of close-to-convexity, starlikeness and convexity, extreme points, and integral mean inequalities for the fractional derivatives; furthermore, growth and distortion theorems are given using fractional calculus techniques.

Keywords:

Multivalent functions; Opoola differential operator; Coefficient inequalities; Closure property.

1. Introduction

Let \(A\) denote the class of all functions \(f(z)\) normalized by
\begin{align}\tag{1}\label{eq1.1} f(z)=z+\sum_{k=2}^{\infty}a_{k}z^{k}\,, \end{align}
which are analytic in the unit disc \(U=\lbrace z:|z|< 1\rbrace\).

Definition 3. [1] For \(f(z)\in A\), Opoola introduced the following operator:

\begin{align}\tag{2}\label{eq1.2} D^{0}(\mu,\beta,t)f(z)&=f(z),\notag\\ D^{1}(\mu,\beta,t)f(z)&=zD_{t}f(z)=tzf'(z)-z(\beta-\mu)t+[1+(\beta-\mu-1)t]f(z),\notag\\ D^{n}(\mu,\beta,t)f(z)&=zD_{t} (D^{n-1}(\mu,\beta,t)f(z)),& n\in N. \end{align}
If \(f(z)\) is given by (\ref{eq1.1}), then from (2), we see that
\begin{align}\tag{3}\label{eq1.3} D^{n}(\mu,\beta,t)f(z)=z+\sum_{k=2}^{\infty}\left[1+\left(k+\beta-\mu-1\right) t\right] ^{n}a_kz^{k}\,, \end{align}
\( (0\leq\mu \leq \beta\), \(t\ge 0\) and \(n\in N_{0}=N\cup\{0\})\).
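The closed form (3) can be checked against the recursion (2) on a truncated series. A small sketch (the coefficient values, parameters, and truncation order are arbitrary illustrative choices):

```python
# Verify that iterating the operator D_t of (2) on a truncated series
#   f(z) = z + sum_{k>=2} a_k z^k
# reproduces the closed-form coefficients [1 + (k + beta - mu - 1) t]^n a_k of (3).

def apply_Dt(coeffs, mu, beta, t):
    """One application of D_t to a coefficient dict {k: a_k} with a_1 = 1."""
    out = {k: t * k * a + (1 + (beta - mu - 1) * t) * a for k, a in coeffs.items()}
    out[1] -= (beta - mu) * t          # the -z(beta-mu)t term of (2)
    return out

def closed_form(coeffs, mu, beta, t, n):
    """Coefficients of D^n(mu, beta, t) f according to (3); the z-coefficient stays 1."""
    return {k: a if k == 1 else (1 + (k + beta - mu - 1) * t) ** n * a
            for k, a in coeffs.items()}

mu, beta, t, n = 0.5, 1.5, 0.3, 4
f = {1: 1.0, 2: 0.25, 3: -0.1, 5: 0.05}   # truncated test series
g = dict(f)
for _ in range(n):
    g = apply_Dt(g, mu, beta, t)
assert all(abs(g[k] - v) < 1e-9 for k, v in closed_form(f, mu, beta, t, n).items())
```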

Remark 1.

1. When \(\beta=\mu\) and \(t=1\), \(D^{n}(\mu,\beta,t)f(z)=D^{n}f(z)\), the operator introduced by Salagean [2];
  2. When \(\beta=\mu\) and \(t=\lambda\), \(D^{n}(\mu,\beta,t)f(z)=D_{\lambda}^{n}f(z)\), the operator introduced by Al-Oboudi [3].

Definition 1. Let \(A_p\) denote the class of functions of the form:

\begin{align}\tag{4}\label{eq1.4} f(z)=z^{p} + \sum_{k=p+1}^{\infty}a_kz^{k},\;\;\;\;\;(p=1,2,...)\,, \end{align}
which are analytic and multivalent in the open unit disc \(U=\lbrace z\in C:\left|z\right|< 1 \rbrace\). We define the following differential operator for the functions \(f(z)\in A_p\):
\begin{align}\tag{5} D^{0}(\mu,\beta,t,p)f(z)& =f(z),\notag\\ D^{1}(\mu,\beta,t,p)f(z)& =zD_{t,p}f(z)=\frac{t}{p}zf'(z)-z^{p}(\beta-\mu)t+[1+(\beta-\mu-1)t]f(z),\notag\\ D^{n}(\mu,\beta,t,p)f(z)& =zD_{t,p} (D^{n-1}(\mu,\beta,t,p)f(z)), & n\in N.\label{eq1.5} \end{align}
If \(f(z)\) is given by (\ref{eq1.4}), then from (5), we see that
\begin{align}\tag{6}\label{eq1.6} D^{n}(\mu,\beta,t,p)f(z)=z^{p}+\sum_{k=p+1}^{\infty}\left[ 1+\left( \frac{k}{p}+\beta-\mu-1\right)t\right]^{n}a_kz^{k}\,, \end{align}
\( (0\leq\mu \leq \beta\), \(t\ge 0\) and \(n\in N_{0}=N\cup\{0\})\). Let \(T_{p}\) denote the subclass of \(A_{p}\) consisting of functions of the form
\begin{align}\tag{7}\label{eq1.7} f(z)=z^{p}-\sum_{k=p+1}^{\infty}a_{k}z^{k} ,&& (a_k\ge 0,p=1,2,...)\,. \end{align}
If \(f(z)\) is given by Eq. (\ref{eq1.7}), then from Eq. (5), we get
\begin{align}\tag{8}\label{eq1.8} D^{n}(\mu,\beta,t,p)f(z)=z^{p}-\sum_{k=p+1}^{\infty}\left[ 1+\left( \frac{k}{p}+\beta-\mu-1\right)t\right]^{n}a_kz^{k}, \end{align}
\((a_k\ge 0,\ p=1,2,\ldots,\ 0\leq\mu \leq \beta,\ t\ge 0,\ n\in N_{0}=N\cup\{0\})\,.\)

Remark 2. When \(\beta=\mu\) in (\ref{eq1.8}), \(D^{n}(\mu,\beta,t,p)f(z)=D^{n}_{\delta,p}f(z)\), the operator defined by Bulut in [4]. Now, from (\ref{eq1.8}), it follows that \(D^{n}(\mu,\beta,t,p)f(z)\) can be written in terms of convolution as $$D^{n}(\mu,\beta,t,p)f(z)=(f\ast g)(z)\,,$$ where \(f(z)\) is as in (\ref{eq1.7}), while $$g(z)=z^{p}-\sum_{k=p+1}^{\infty}\left[ 1+\left( \frac{k}{p}+\beta-\mu-1\right)t\right]^{n}z^{k}\,.$$

Definition 2. A function \(f(z)\in T_{p}\) is in the class \(S^{n}_{p}(\lambda,\alpha,\gamma,\delta)\) if

\begin{align}\tag{9}\label{eq1.9} \left|\frac{(D^{n}(\mu,\beta,t,p)f(z))'-pz^{p-1}}{\lambda (D^{n}(\mu,\beta,t,p)f(z))'+(\alpha-\gamma)}\right|< \delta, && (z\in U,n\in N_{0})\,, \end{align}
for some \(0\le\lambda< 1\), \(0\le\gamma< 1\), \(0< \alpha\le 1\), \(0< \delta< 1\), with \(D^{n}(\mu,\beta,t,p)f(z)\) as defined in (\ref{eq1.8}).

Remark 3. When \(\mu=\beta\) in (\ref{eq1.9}), the class \(S^{n}_{p}(\lambda,\alpha,\gamma,\delta)\) reduces to the class \(R^{n}_{p}(\alpha,\beta,\gamma,\mu)\) studied by Bulut in [4].

Definition 4. [5, 6] The fractional integral of order \(l\) is defined, for a function \(f(z)\), by

\begin{align}\tag{10}\label{eq1.10} D_{z}^{-l}f(z)=\frac{1}{\Gamma (l)}\int_{0}^{z}\frac{f(t)}{(z-t)^{1-l}}dt,&&(l>0)\,, \end{align}
where \(f(z)\) is an analytic function in a simply connected region of the \(z\)-plane containing the origin, and the multiplicity of \((z-t)^{l-1}\) is removed by requiring \(\log(z-t)\) to be real when \(z-t>0\).

Definition 5. [5, 6] The fractional derivative of order \(l\) is defined, for a function \(f(z)\), by

\begin{align}\tag{11}\label{eq1.11} D_{z}^{l}f(z)=\frac{1}{\Gamma (1-l)}\frac{d}{dz}\int_{0}^{z}\frac{f(t)}{(z-t)^{l}}dt,& & (0\le l< 1)\,, \end{align}
where \(f\) is an analytic function in a simply connected region of the \(z\)-plane containing the origin, and the multiplicity of \((z-t)^{-l}\) is removed by requiring \(\log(z-t)\) to be real when \(z-t>0\).

Definition 6. [5, 6] Under the hypotheses of Definition 5, the fractional derivative of order \(p+l\) is defined, for a function \(f(z)\), by

\begin{align}\tag{12}\label{eq1.12} D_{z}^{p+l}f(z)=\frac{d^{p}}{dz^{p}}D_{z}^{l}f(z), & &(0\le l< 1,p\in N_{0})\,. \end{align}
It readily follows from (\ref{eq1.10}) and (\ref{eq1.11}) that
\begin{align}\tag{13}\label{eq1.13} D_{z}^{-l}z^{k}=\frac{\Gamma (k+1)}{\Gamma (k+l+1)}z^{k+l},& & (l>0,k\in N)\,, \end{align}
and
\begin{align}\tag{14}\label{eq1.14} D_{z}^{l}z^{k}=\frac{\Gamma (k+1)}{\Gamma (k-l+1)}z^{k-l}, & & (0\le l< 1,k\in N)\,. \end{align}
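Formula (13) can be confirmed numerically from Definition 4 for real \(z>0\); the sketch below removes the endpoint singularity of the kernel with the substitution \(u=v^{1/l}\) (the sample values of \(k\), \(l\), \(z\) are arbitrary):

```python
import math

# Check D_z^{-l} z^k = Gamma(k+1)/Gamma(k+l+1) * z^(k+l)  (formula (13))
# against the integral of Definition 4 for real z > 0.
# Substituting u = z - t and then u = v^(1/l) gives the smooth integrand
#   D_z^{-l} f(z) = (1 / (l * Gamma(l))) * integral_0^{z^l} f(z - v^(1/l)) dv.

def frac_integral(f, z, l, n=100_000):
    """Midpoint-rule evaluation of the Riemann-Liouville integral of order l."""
    upper = z ** l
    h = upper / n
    s = sum(f(z - (h * (i + 0.5)) ** (1 / l)) for i in range(n))
    return s * h / (l * math.gamma(l))

k, l, z = 2, 0.5, 1.0
numeric = frac_integral(lambda t: t ** k, z, l)
closed = math.gamma(k + 1) / math.gamma(k + l + 1) * z ** (k + l)
assert abs(numeric - closed) < 1e-6
```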

Lemma 1.[7] If \(f(z)\) and \(g(z)\) are analytic in \(U\) with \(f(z) \prec g(z)\), then for \(\sigma >0\) and \(z=re^{i\theta}\), \((0 < r< 1)\), \[\int_{0}^{2\pi}\left|f(z)\right|^{\sigma}d\theta \le \int_{0}^{2\pi}\left|g(z)\right|^{\sigma}d\theta\,.\]

In this work, several properties of the class \(S^{n}_{p}(\lambda,\alpha,\gamma,\delta)\) are studied, such as coefficient inequalities, the Hadamard product, radii of close-to-convexity, starlikeness and convexity, extreme points, and integral mean inequalities for the fractional derivatives; furthermore, growth and distortion theorems are given using fractional calculus techniques. For more research on classes of multivalent or \(p\)-valent functions, see [7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18].

2. Main results

Theorem 1. A function \(f(z)\in T_{p}\) is in the class \(S^{n}_{p}(\lambda,\alpha,\gamma,\delta)\) if and only if

\begin{align}\tag{15}\label{eq1.15} \sum_{k=p+1}^{\infty}k[1+(\frac{k}{p}+\beta-\mu-1)t]^{n}(1+\delta\lambda)a_k \le \delta(\lambda p+\alpha-\gamma)\,, \end{align}
for some \(0\le\lambda< 1\), \(0\le\gamma< 1\),\(0< \alpha\le 1\),\(0< \delta< 1\). The result is sharp for the function \(f(z)\) given by \begin{align*}f(z)=z^{p}-\frac{\delta(\lambda p+\alpha-\gamma)}{k[1+(\frac{k}{p}+\beta-\mu-1)t]^{n}(1+\delta\lambda)}z^{k},& & (k\ge p+1).\end{align*}

Proof Suppose that \(f(z)\in S^{n}_{p}(\lambda,\alpha,\gamma,\delta) \), then we have from (\ref{eq1.9}) that $$\left|\frac{(D^{n}(\mu,\beta,t,p)f(z))'-pz^{p-1}}{\lambda (D^{n}(\mu,\beta,t,p)f(z))'+(\alpha-\gamma)}\right|< \delta\,.$$ By substitution, we have $$\left|\frac{pz^{p-1}-\sum_{k=p+1}^{\infty}k[1+(\frac{k}{p}+\beta-\mu-1)t]^{n}a_kz^{k-1}-pz^{p-1}}{\lambda (pz^{p-1}-\sum_{k=p+1}^{\infty}k[1+(\frac{k}{p}+\beta-\mu-1)t]^{n}a_kz^{k-1})+(\alpha-\gamma)}\right|< \delta,$$ i.e., $$\left|\frac{\sum_{k=p+1}^{\infty}k[1+(\frac{k}{p}+\beta-\mu-1)t]^{n}a_kz^{k-1}}{\lambda (pz^{p-1}-\sum_{k=p+1}^{\infty}k[1+(\frac{k}{p}+\beta-\mu-1)t]^{n}a_kz^{k-1})+(\alpha-\gamma)}\right|< \delta\,.$$ Since \(\Re z\le\left|z\right|\), we get $$\Re \left\lbrace\frac{\sum_{k=p+1}^{\infty}k[1+(\frac{k}{p}+\beta-\mu-1)t]^{n}a_kz^{k-1}}{\lambda (pz^{p-1}-\sum_{k=p+1}^{\infty}k[1+(\frac{k}{p}+\beta-\mu-1)t]^{n}a_kz^{k-1})+(\alpha-\gamma)}\right\rbrace < \delta\,. $$ If we choose \(z\) real and let \(z\rightarrow 1^{-}\), then we get \begin{align*}& \sum_{k=p+1}^{\infty}k[1+(\frac{k}{p}+\beta-\mu-1)t]^{n}a_k\le \delta\left[ \lambda (p-\sum_{k=p+1}^{\infty}k[1+(\frac{k}{p}+\beta-\mu-1)t]^{n}a_k)+(\alpha-\gamma)\right],\\ & \Rightarrow \sum_{k=p+1}^{\infty}k[1+(\frac{k}{p}+\beta-\mu-1)t]^{n}a_k \le \delta \lambda p -\delta \lambda\sum_{k=p+1}^{\infty}k[1+(\frac{k}{p}+\beta-\mu-1)t]^{n}a_k+\delta(\alpha-\gamma),\\ & \Rightarrow \sum_{k=p+1}^{\infty}k[1+(\frac{k}{p}+\beta-\mu-1)t]^{n}a_k+\delta \lambda\sum_{k=p+1}^{\infty}k[1+(\frac{k}{p}+\beta-\mu-1)t]^{n}a_k\le\delta(\lambda p+\alpha-\gamma),\\ & \Rightarrow \sum_{k=p+1}^{\infty}k[1+(\frac{k}{p}+\beta-\mu-1)t]^{n}(1+\delta \lambda)a_k \le\delta(\lambda p+\alpha-\gamma).\end{align*} Conversely, suppose that the inequality (\ref{eq1.15}) holds true and let \(z\in \partial U=\left\lbrace z\in C:\left| z\right|=1\right\rbrace\). Then \begin{align*}& \left|(D^{n}(\mu,\beta,t,p)f(z))'-pz^{p-1}\right| - \delta\left| \lambda
(D^{n}(\mu,\beta,t,p)f(z))'+(\alpha-\gamma)\right|\\ & \le\left|(D^{n}(\mu,\beta,t,p)f(z))'-pz^{p-1}- \delta(\lambda(D^{n}(\mu,\beta,t,p)f(z))'+(\alpha-\gamma))\right|\\ & =\left|-\sum_{k=p+1}^{\infty}k[1+(\frac{k}{p}+\beta-\mu-1)t]^{n}a_{k}z^{k-1} -\delta\lambda pz^{p-1} \right.\\&+\left. \delta\lambda\sum_{k=p+1}^{\infty}k[1+(\frac{k}{p}+\beta-\mu-1)t]^{n}a_{k}z^{k-1}-\delta(\alpha-\gamma)\right|\\ & =\left|\sum_{k=p+1}^{\infty}(\delta\lambda-1)k[1+(\frac{k}{p}+\beta-\mu-1)t]^{n}a_{k}z^{k-1}-\delta(\lambda pz^{p-1}+\alpha-\gamma)\right|\\ & \le\sum_{k=p+1}^{\infty}(\delta\lambda-1)k[1+(\frac{k}{p}+\beta-\mu-1)t]^{n}a_{k}-\delta(\lambda p+\alpha-\gamma)\\ & \le\sum_{k=p+1}^{\infty}k[1+(\frac{k}{p}+\beta-\mu-1)t]^{n}(\delta\lambda+1)a_{k}-\delta(\lambda p+\alpha-\gamma)\le 0.\end{align*} Since, by the maximum modulus theorem, the maximum modulus of an analytic function cannot be attained inside the domain but only on its boundary, it follows that $$\left|(D^{n}(\mu,\beta,t,p)f(z))'-pz^{p-1}\right|- \delta\left| \lambda (D^{n}(\mu,\beta,t,p)f(z))'+(\alpha-\gamma)\right|< 0\,,$$ i.e., $$ \left|(D^{n}(\mu,\beta,t,p)f(z))'-pz^{p-1}\right|< \delta\left| \lambda (D^{n}(\mu,\beta,t,p)f(z))'+(\alpha-\gamma)\right|\,.$$ So, $$\frac{\left|(D^{n}(\mu,\beta,t,p)f(z))'-pz^{p-1}\right|}{\left|\lambda (D^{n}(\mu,\beta,t,p)f(z))'+(\alpha-\gamma)\right|}< \delta\,,$$ which implies $$ \left|\frac{(D^{n}(\mu,\beta,t,p)f(z))'-pz^{p-1}}{\lambda (D^{n}(\mu,\beta,t,p)f(z))'+(\alpha-\gamma)}\right|< \delta.$$ Hence, we have that \(f(z)\in S^{n}_{p}(\lambda,\alpha,\gamma,\delta).\)
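The two halves of Theorem 1 can be sanity-checked numerically: the extremal function attains equality in (15), and the defining inequality (9) then holds strictly inside the unit disc. A sketch with illustrative parameter values (not taken from the paper):

```python
import cmath

# Numerical sanity check of Theorem 1 for the extremal function
#   f(z) = z^p - A z^k,  A = delta*(lam*p + alpha - gamma) / (k*M*(1 + delta*lam)),
# with M = [1 + (k/p + beta - mu - 1) t]^n: equality holds in (15), and the
# ratio in (9) stays below delta on |z| <= 0.99.  All parameter values are
# illustrative choices, not from the paper.
p, n, k = 1, 1, 2
mu, beta, t = 0.0, 0.5, 0.4
lam, alpha, gamma, delta = 0.5, 0.8, 0.2, 0.6

M = (1 + (k / p + beta - mu - 1) * t) ** n
A = delta * (lam * p + alpha - gamma) / (k * M * (1 + delta * lam))

# Equality in the coefficient condition (15):
assert abs(k * M * (1 + delta * lam) * A - delta * (lam * p + alpha - gamma)) < 1e-12

def ratio(z):
    """|((D^n f)' - p z^(p-1)) / (lam (D^n f)' + alpha - gamma)| from (9)."""
    d = p * z ** (p - 1) - k * M * A * z ** (k - 1)   # (D^n(mu,beta,t,p) f)'(z)
    return abs((d - p * z ** (p - 1)) / (lam * d + (alpha - gamma)))

grid = [0.99 * cmath.exp(2j * cmath.pi * j / 64) for j in range(64)]
assert max(ratio(z) for z in grid) < delta
```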

Corollary 1. If \(f(z)\in S^{n}_{p}(\lambda,\alpha,\gamma,\delta)\), then $$a_{p+1}\le \frac{\delta(\lambda p+\alpha-\gamma)p^{n}}{(p+1)[p+(p(\beta-\mu)+1)t]^{n}(1+\delta \lambda)}\,.$$

Theorem 2. The class \(S^{n}_{p}(\lambda,\alpha,\gamma,\delta)\) is convex.

Proof Let the functions

\begin{align}\tag{16}\label{eq1.16} f(z)=z^{p}-\sum_{k=p+1}^{\infty}a_{k}z^{k} ,&& (a_k\ge 0,p=1,2,...), \end{align}
\begin{align}\tag{17}\label{eq1.17} g(z)=z^{p}-\sum_{k=p+1}^{\infty}b_{k}z^{k} ,&& (b_k\ge 0,p=1,2,...), \end{align}
be in the class \(S^{n}_{p}(\lambda,\alpha,\gamma,\delta)\), then for \(0\le j\le1\), $$h(z)=(1-j)f(z)+jg(z)=z^{p}-\sum_{k=p+1}^{\infty}c_{k}z^{k}\,,$$ where \(c_{k}=(1-j)a_{k}+jb_{k}\ge0\). Making use of (\ref{eq1.15}), we see that \begin{align*}\sum_{k=p+1}^{\infty}k&[1+(\frac{k}{p}+\beta-\mu-1)t]^{n}(1+\delta\lambda)c_k\\ &=(1-j)\sum_{k=p+1}^{\infty}k[1+(\frac{k}{p}+\beta-\mu-1)t]^{n}(1+\delta\lambda)a_k+j\sum_{k=p+1}^{\infty}k[1+(\frac{k}{p}+\beta-\mu-1)t]^{n}(1+\delta\lambda)b_k\\ &\le (1-j)\delta(\lambda p+\alpha-\gamma)+j\delta(\lambda p+\alpha-\gamma)\\ &=\delta(\lambda p+\alpha-\gamma),\end{align*} which implies \(h(z)\in S^{n}_{p}(\lambda,\alpha,\gamma,\delta)\). This completes the proof.
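Theorem 2 amounts to saying that condition (15) is preserved under convex combinations of coefficient sequences; a quick numeric sketch with two admissible truncated sequences (all parameter and coefficient choices are illustrative):

```python
# Convexity check for Theorem 2: if two truncated coefficient sequences
# satisfy (15), so does every convex combination.  Parameters illustrative.
p, n = 1, 2
mu, beta, t = 0.0, 1.0, 0.2
lam, alpha, gamma, delta = 0.3, 0.9, 0.1, 0.5

def weight(k):
    """The coefficient multiplier k [1 + (k/p + beta - mu - 1) t]^n (1 + delta*lam)."""
    return k * (1 + (k / p + beta - mu - 1) * t) ** n * (1 + delta * lam)

bound = delta * (lam * p + alpha - gamma)

def admissible(a):
    """Condition (15) for a coefficient dict {k: a_k}, k >= p+1."""
    return sum(weight(k) * ak for k, ak in a.items()) <= bound

a = {2: bound / (2 * weight(2)), 3: bound / (4 * weight(3))}   # satisfies (15)
b = {2: bound / (3 * weight(2)), 4: bound / (2 * weight(4))}   # satisfies (15)
assert admissible(a) and admissible(b)

for j in (0.0, 0.25, 0.5, 0.75, 1.0):
    c = {k: (1 - j) * a.get(k, 0.0) + j * b.get(k, 0.0) for k in set(a) | set(b)}
    assert admissible(c)
```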

Theorem 3. If each of the functions \(f(z)\) and \(g(z)\) is in the class \(S^{n}_{p}(\lambda,\alpha,\gamma,\delta)\), then \((f\ast g)(z)\in S^{n}_{p}(\lambda,\alpha,\gamma,\Omega)\), where \(\Omega \ge \frac{\delta^{2}(\lambda p+\alpha-\gamma)}{k[1+(\frac{k}{p}+\beta-\mu-1)t]^{n}(1+\delta\lambda)^{2}-\delta^{2}\lambda(\lambda p+\alpha-\gamma)}\).

Proof From (\ref{eq1.15}), we have

\begin{align}\tag{18}\label{eq1.18} \sum_{k=p+1}^{\infty}\frac{k[1+(\frac{k}{p}+\beta-\mu-1)t]^{n}(1+\delta\lambda)}{\delta(\lambda p+\alpha-\gamma)}a_{k}\le 1\,, \end{align}
and
\begin{align}\tag{19}\label{eq1.19} \sum_{k=p+1}^{\infty}\frac{k[1+(\frac{k}{p}+\beta-\mu-1)t]^{n}(1+\delta\lambda)}{\delta(\lambda p+\alpha-\gamma)}b_{k}\le 1\,. \end{align}
We need to find the smallest \(\Omega\) such that
\begin{align}\tag{20}\label{eq1.20} \sum_{k=p+1}^{\infty}\frac{k[1+(\frac{k}{p}+\beta-\mu-1)t]^{n}(1+\Omega\lambda)}{\Omega(\lambda p+\alpha-\gamma)}a_{k}b_{k}\le 1\,. \end{align}
From (\ref{eq1.18}) and (\ref{eq1.19}), we find by means of the Cauchy-Schwarz inequality that
\begin{align}\tag{21}\label{eq1.21} \sum_{k=p+1}^{\infty}\frac{k[1+(\frac{k}{p}+\beta-\mu-1)t]^{n}(1+\delta\lambda)}{\delta(\lambda p+\alpha-\gamma)}\sqrt {a_{k}b_{k}}\le 1\,. \end{align}
Thus, it is enough to show that $$\frac{k[1+(\frac{k}{p}+\beta-\mu-1)t]^{n}(1+\Omega\lambda)}{\Omega(\lambda p+\alpha-\gamma)}a_{k}b_{k}\le \frac{k[1+(\frac{k}{p}+\beta-\mu-1)t]^{n}(1+\delta\lambda)}{\delta(\lambda p+\alpha-\gamma)}\sqrt {a_{k}b_{k}}\,.$$ That is
\begin{align}\tag{22}\label{eq1.22} \sqrt{a_{k}b_{k}} \le \frac{\Omega(1+\delta \lambda)}{\delta(1+\Omega \lambda)}\,. \end{align}
On the other hand, from (\ref{eq1.21}), we have
\begin{align}\tag{23}\label{eq1.23} \sqrt{a_{k}b_{k}} \le \frac{\delta(\lambda p+\alpha-\gamma)}{k[1+(\frac{k}{p}+\beta-\mu-1)t]^{n}(1+\delta\lambda)}\,. \end{align}
Therefore, in view of (\ref{eq1.22}) and (\ref{eq1.23}), it is enough to show that $$\frac{\delta(\lambda p+\alpha-\gamma)}{k[1+(\frac{k}{p}+\beta-\mu-1)t]^{n}(1+\delta\lambda)}\le \frac{\Omega(1+\delta \lambda)}{\delta(1+\Omega \lambda)},$$ i.e., $$ \delta(\lambda p+\alpha-\gamma)\delta(1+\Omega \lambda)\le kM(1+\delta\lambda)\Omega(1+\delta \lambda)\,,$$ where \(M=[1+(\frac{k}{p}+\beta-\mu-1)t]^{n}\). So, $$\delta^{2}[(\lambda p+\alpha-\gamma)+\Omega \lambda(\lambda p+\alpha-\gamma)]\le k\Omega M(1+\delta \lambda)^{2}\,,$$ implies $$ \delta^{2}(\lambda p+\alpha-\gamma)+\delta^{2}\Omega \lambda(\lambda p+\alpha-\gamma) \le k\Omega M(1+\delta \lambda)^{2}\,,$$ implies \[ \delta^{2}(\lambda p+\alpha-\gamma) \le k\Omega M(1+\delta \lambda)^{2}-\delta^{2}\Omega \lambda(\lambda p+\alpha-\gamma)\,.\] Also \[\Omega\left[ k M(1+\delta \lambda)^{2}-\delta^{2} \lambda(\lambda p+\alpha-\gamma)\right] \ge \delta^{2}(\lambda p+\alpha-\gamma)\,,\] implies \[ \Omega \ge\frac{\delta^{2}(\lambda p+\alpha-\gamma)}{k[1+(\frac{k}{p}+\beta-\mu-1)t]^{n}(1+\delta\lambda)^{2}-\delta^{2}\lambda(\lambda p+\alpha-\gamma)}\,.\]
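The bound on \(\Omega\) in Theorem 3 is attained for the single-term extremal functions of Theorem 1: with \(a_k=b_k=A\), the left-hand side of (20) equals exactly 1 when \(\Omega\) equals the stated bound. A numeric sketch (illustrative parameters):

```python
# Check the sharpness of the Hadamard-product bound in Theorem 3:
# for f = g = z^p - A z^k (the extremal function of Theorem 1),
# condition (20) holds with equality when Omega equals the stated bound.
p, n, k = 1, 1, 2
mu, beta, t = 0.0, 0.5, 0.4
lam, alpha, gamma, delta = 0.5, 0.8, 0.2, 0.6

Lam = lam * p + alpha - gamma
M = (1 + (k / p + beta - mu - 1) * t) ** n
A = delta * Lam / (k * M * (1 + delta * lam))          # extremal coefficient

Omega = delta ** 2 * Lam / (k * M * (1 + delta * lam) ** 2 - delta ** 2 * lam * Lam)

# Left-hand side of (20) with a_k * b_k = A^2:
lhs = k * M * (1 + Omega * lam) * A ** 2 / (Omega * Lam)
assert abs(lhs - 1.0) < 1e-9
```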

Theorem 4. If \(f(z)\in S^{n}_{p}(\lambda,\alpha,\gamma,\delta)\), then \(f(z)\) is p-valently close-to-convex of order \(\rho\) in \(\left|z\right|< r_{1}(\lambda,\alpha,\gamma,\delta,\rho)\), where \begin{align*}r_{1}(\lambda,\alpha,\gamma,\delta,\rho)=\inf_{k}\left\lbrace\frac{[1+(\frac{k}{p}+\beta-\mu-1)t]^{n}(1+\delta\lambda)(p-\rho)}{\delta(\lambda p+\alpha-\gamma)} \right\rbrace^{\frac{1}{k-p}},& & (k\ge p+1)\,.\end{align*}

Proof Let \(f(z)\in S^{n}_{p}(\lambda,\alpha,\gamma,\delta)\). It suffices to show that \[\left|\frac{f'(z)}{z^{p-1}}-p\right|< p-\rho\,.\] Now

\begin{align}\tag{24}\label{eq1.24} \left|\frac{pz^{p-1}-\sum_{k=p+1}^{\infty}ka_{k}z^{k-1}-pz^{p-1}}{z^{p-1}}\right|=\left|\sum_{k=p+1}^{\infty}ka_{k}z^{k-p}\right|\le\sum_{k=p+1}^{\infty}ka_{k}\left|z\right|^{k-p}< p-\rho\,. \end{align}
Since
\begin{align}\tag{25}\label{eq1.25} \sum_{k=p+1}^{\infty}k[1+(\frac{k}{p}+\beta-\mu-1)t]^{n}(1+\delta\lambda)a_k \le \delta(\lambda p+\alpha-\gamma)\,, \end{align}
hence, (\ref{eq1.24}) is true if
\begin{align}\tag{26}\label{eq1.26} \frac{k\left|z\right|^{k-p}}{p-\rho}< \frac{k[1+(\frac{k}{p}+\beta-\mu-1)t]^{n}(1+\delta\lambda)}{\delta(\lambda p+\alpha-\gamma)}\,. \end{align}
Solving (\ref{eq1.26}) for \(\left|z\right|\), we obtain \begin{align*} \left|z\right|< \left\lbrace\frac{[1+(\frac{k}{p}+\beta-\mu-1)t]^{n}(1+\delta\lambda)(p-\rho)}{\delta(\lambda p+\alpha-\gamma)} \right\rbrace^{\frac{1}{k-p}}, & & (k\ge p+1).\end{align*} Hence, the proof.
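The radius in Theorem 4 is sharp for the single-term extremal function, for which \(\left|f'(z)/z^{p-1}-p\right|=kA\left|z\right|^{k-p}\) exactly; the inequality holds just below \(r_1\) and fails just above it. A sketch with illustrative parameters:

```python
# Radius check for Theorem 4: for the extremal f(z) = z^p - A z^k of
# Theorem 1, the close-to-convexity inequality |f'(z)/z^(p-1) - p| < p - rho
# holds for |z| slightly below r_1 and fails slightly above it
# (here r_1 < 1).  All parameter values are illustrative.
p, n, k = 1, 1, 2
mu, beta, t = 0.0, 0.5, 0.4
lam, alpha, gamma, delta = 0.5, 0.8, 0.2, 0.6
rho = 0.8

M = (1 + (k / p + beta - mu - 1) * t) ** n
A = delta * (lam * p + alpha - gamma) / (k * M * (1 + delta * lam))

# r_1 for this single term (the infimum over k reduces to one value here):
r1 = (M * (1 + delta * lam) * (p - rho)
      / (delta * (lam * p + alpha - gamma))) ** (1 / (k - p))

def lhs(r):
    """|f'(z)/z^(p-1) - p| on |z| = r for the single-term extremal f."""
    return k * A * r ** (k - p)

assert lhs(0.999 * r1) < p - rho
assert lhs(1.001 * r1) > p - rho
```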

Theorem 5. If \(f(z)\in S^{n}_{p}(\lambda,\alpha,\gamma,\delta)\), then \(f(z)\) is p-valently starlike of order \(\rho\) in \(\left|z\right|< r_{2}(\lambda,\alpha,\gamma,\delta,\rho)\), where \begin{align*} r_{2}(\lambda,\alpha,\gamma,\delta,\rho)=\inf_{k}\left\lbrace\frac{k[1+(\frac{k}{p}+\beta-\mu-1)t]^{n}(1+\delta\lambda)(p-\rho)}{\delta(\lambda p+\alpha-\gamma)(k-\rho)} \right\rbrace^{\frac{1}{k-p}},& & (k\ge p+1).\end{align*}

Proof Let \(f(z)\in S^{n}_{p}(\lambda,\alpha,\gamma,\delta)\). It suffices to show that \[\left|\frac{zf'(z)}{f(z)}-p\right|< p-\rho\,.\] Now,

\begin{align}\tag{27} \left|\frac{zf'(z)}{f(z)}-p\right|& =\left|\frac{z(pz^{p-1}-\sum_{k=p+1}^{\infty}ka_{k}z^{k-1})-p(z^{p}-\sum_{k=p+1}^{\infty}a_{k}z^{k})}{z^{p}-\sum_{k=p+1}^{\infty}a_{k}z^{k}}\right|\notag\\ \label{eq1.27}& =\left|\frac{pz^{p}-\sum_{k=p+1}^{\infty}ka_{k}z^{k}-pz^{p}+p\sum_{k=p+1}^{\infty}a_{k}z^{k}}{z^{p}-\sum_{k=p+1}^{\infty}a_{k}z^{k}}\right|\,. \end{align}
Hence, it is enough to have
\begin{align}\tag{28}\label{eq1.28} \left|\frac{-\sum_{k=p+1}^{\infty}(k-p)a_{k}z^{k}}{z^{p}-\sum_{k=p+1}^{\infty}a_{k}z^{k}}\right|= \left|\frac{\sum_{k=p+1}^{\infty}(k-p)a_{k}z^{k-p}}{1-\sum_{k=p+1}^{\infty}a_{k}z^{k-p}}\right|\le \frac{\sum_{k=p+1}^{\infty}(k-p)a_{k}\left| z\right| ^{k-p}}{1-\sum_{k=p+1}^{\infty}a_{k}\left|z\right|^{k-p}}< p-\rho\,, \end{align}
i.e., \[\sum_{k=p+1}^{\infty}(k-p)a_{k}\left| z\right| ^{k-p}< (p-\rho)\Big(1-\sum_{k=p+1}^{\infty}a_{k}\left|z\right|^{k-p}\Big)\,,\] which gives \[\sum_{k=p+1}^{\infty}(k-\rho)a_{k}\left|z\right|^{k-p}< p-\rho\,,\] that is, \[\sum_{k=p+1}^{\infty}\frac {(k-\rho)a_{k}\left|z\right|^{k-p}}{p-\rho}< 1\,.\] Since \[ \sum_{k=p+1}^{\infty}k[1+(\frac{k}{p}+\beta-\mu-1)t]^{n}(1+\delta\lambda)a_k \le \delta(\lambda p+\alpha-\gamma)\,, \] this holds true if \[\frac {(k-\rho)\left|z\right|^{k-p}}{p-\rho}< \frac{k[1+(\frac{k}{p}+\beta-\mu-1)t]^{n}(1+\delta\lambda)}{\delta(\lambda p+\alpha-\gamma)}\,,\] i.e., \begin{align*} \left|z\right|< \left\lbrace\frac{k[1+(\frac{k}{p}+\beta-\mu-1)t]^{n}(1+\delta\lambda)(p-\rho)}{(k-\rho)\delta(\lambda p+\alpha-\gamma)} \right\rbrace^{\frac{1}{k-p}} , & & (k\ge p+1),\end{align*} hence, the proof.

Theorem 6. If \(f(z)\in S^{n}_{p}(\lambda,\alpha,\gamma,\delta)\), then \(f(z)\) is p-valently convex of order \(\rho\) in \(\left|z\right|< r_{3}(\lambda,\alpha,\gamma,\delta,\rho)\), where \begin{align*} r_{3}(\lambda,\alpha,\gamma,\delta,\rho)=\inf_{k}\left\lbrace\frac{[1+(\frac{k}{p}+\beta-\mu-1)t]^{n}(1+\delta\lambda)p(p-\rho)}{\delta(\lambda p+\alpha-\gamma)(k-\rho)} \right\rbrace^{\frac{1}{k-p}}, & & (k\ge p+1).\end{align*}

Proof Let \(f(z)\in S^{n}_{p}(\lambda,\alpha,\gamma,\delta)\). It suffices to show that \[\left|1+\frac{zf''(z)}{f'(z)}-p\right|< p-\rho\,.\] Now \begin{align*} & \left|1+\frac{zf''(z)}{f'(z)}-p\right|\\ & =\left|\frac{pz^{p-1}-\sum_{k=p+1}^{\infty}ka_{k}z^{k-1}+z(p(p-1)z^{p-2}-\sum_{k=p+1}^{\infty}k(k-1)a_{k}z^{k-2})-p(pz^{p-1}-\sum_{k=p+1}^{\infty}ka_{k}z^{k-1})}{pz^{p-1}-\sum_{k=p+1}^{\infty}ka_{k}z^{k-1}}\right|\\ & =\left|\frac{pz^{p-1}-\sum_{k=p+1}^{\infty}ka_{k}z^{k-1}+p(p-1)z^{p-1}-\sum_{k=p+1}^{\infty}k(k-1)a_{k}z^{k-1}-p^{2}z^{p-1}+\sum_{k=p+1}^{\infty}pka_{k}z^{k-1}}{pz^{p-1}-\sum_{k=p+1}^{\infty}ka_{k}z^{k-1}}\right|\\ & =\left|\frac{-\sum_{k=p+1}^{\infty}k(k-p)a_{k}z^{k-p}}{p-\sum_{k=p+1}^{\infty}ka_{k}z^{k-p}}\right|\\ & \le \frac{\sum_{k=p+1}^{\infty}k(k-p)a_{k}\left| z\right| ^{k-p}}{p-\sum_{k=p+1}^{\infty}ka_{k}\left|z\right|^{k-p}}\,,\end{align*} and this is less than \(p-\rho\) if \[\sum_{k=p+1}^{\infty}k(k-p)a_{k}\left| z\right| ^{k-p}< (p-\rho)(p-\sum_{k=p+1}^{\infty}ka_{k}\left|z\right|^{k-p})\,,\] i.e., if \[\sum_{k=p+1}^{\infty}k(k-p)a_{k}\left| z\right| ^{k-p}< p(p-\rho)-(p-\rho)\sum_{k=p+1}^{\infty}ka_{k}\left|z\right|^{k-p}\,,\] i.e., if \[\sum_{k=p+1}^{\infty}k(k-\rho)a_{k}\left| z\right|^{k-p}< p(p-\rho)\,.\] Since \( \sum\limits_{k=p+1}^{\infty}k[1+(\frac{k}{p}+\beta-\mu-1)t]^{n}(1+\delta\lambda)a_k \le \delta(\lambda p+\alpha-\gamma) \), this is true if \[\frac{k(k-\rho)\left| z\right|^{k-p}}{p(p-\rho)}< \frac{k[1+(\frac{k}{p}+\beta-\mu-1)t]^{n}(1+\delta\lambda)}{\delta(\lambda p+\alpha-\gamma)}\,,\] i.e., if \begin{align*} \left|z\right|< \left\lbrace\frac{[1+(\frac{k}{p}+\beta-\mu-1)t]^{n}(1+\delta\lambda)p(p-\rho)}{(k-\rho)\delta(\lambda p+\alpha-\gamma)} \right\rbrace^{\frac{1}{k-p}},& & (k\ge p+1),\end{align*} hence, the proof.

Theorem 7. Let

\begin{align}\tag{29}\label{eq1.29} f_{p}(z)=z^{p} , f_{k}(z)=z^{p}-\frac{\delta(\lambda p+\alpha-\gamma)}{k[1+(\frac{k}{p}+\beta-\mu-1)t]^{n}(1+\delta\lambda)}z^{k},& & (k\ge p+1), \end{align}
then, \(f(z)\in S^{n}_{p}(\lambda,\alpha,\gamma,\delta)\) if and only if it can be expressed in the form \[f(z)=\lambda_{p}f_{p}(z)+\sum_{k=p+1}^{\infty}\lambda_{k}f_{k}(z)\,,\] where \(\lambda_{k}\ge 0\), \(\lambda_{p}\ge 0\), and \(\lambda_{p}=1 - \sum\limits_{k=p+1}^{\infty}\lambda_{k}\).

Proof Assume that \(f(z)=\lambda_{p}f_{p}(z)+\sum_{k=p+1}^{\infty}\lambda_{k}f_{k}(z)\), then

\begin{align}\tag{30}\label{eq1.30} f(z)=(1 - \sum_{k=p+1}^{\infty}\lambda_{k})z^{p}+\sum_{k=p+1}^{\infty}\lambda_{k}\left\{ z^{p}-\frac{\delta(\lambda p+\alpha-\gamma)}{k[1+(\frac{k}{p}+\beta-\mu-1)t]^{n}(1+\delta\lambda)}z^{k}\right\}\,, \end{align}
implies \[f(z)=z^{p}-\sum_{k=p+1}^{\infty}\lambda_{k}\left\{ \frac{\delta(\lambda p+\alpha-\gamma)}{k[1+(\frac{k}{p}+\beta-\mu-1)t]^{n}(1+\delta\lambda)}\right\}z^{k}\,.\] Thus, \begin{align*} &\sum_{k=p+1}^{\infty}k[1+(\frac{k}{p}+\beta-\mu-1)t]^{n}(1+\delta\lambda)\lambda_{k} \frac{\delta(\lambda p+\alpha-\gamma)}{k[1+(\frac{k}{p}+\beta-\mu-1)t]^{n}(1+\delta\lambda)}\\ &= \delta(\lambda p+\alpha-\gamma) \sum_{k=p+1}^{\infty}\lambda_{k} = \delta(\lambda p+\alpha-\gamma)(1-\lambda_{p}) \le \delta(\lambda p+\alpha-\gamma)\,,\end{align*} which shows that \(f(z)\) satisfies condition (\ref{eq1.15}) and therefore, \(f\in S^{n}_{p}(\lambda,\alpha,\gamma,\delta).\) Conversely, suppose that \(f(z)\in S^{n}_{p}(\lambda,\alpha,\gamma,\delta)\), since \begin{align*} a_{k} \le \frac{\delta(\lambda p+\alpha-\gamma)}{k[1+(\frac{k}{p}+\beta-\mu-1)t]^{n}(1+\delta\lambda)},& & (k\ge p+1),\end{align*} we may set \begin{align*} & \lambda_{k}=\frac{k[1+(\frac{k}{p}+\beta-\mu-1)t]^{n}(1+\delta\lambda)}{\delta(\lambda p+\alpha-\gamma)}a_{k},\\ & \lambda_{p}=1- \sum_{k=p+1}^{\infty}\lambda_{k}\,,\end{align*} then we obtain from \(f(z)= z^{p}-\sum_{k=p+1}^{\infty}a_{k}z^{k}\), \[ f(z)=(\lambda_{p}+\sum_{k=p+1}^{\infty}\lambda_{k})z^{p}- \sum_{k=p+1}^{\infty}\lambda_{k} \frac{\delta(\lambda p+\alpha-\gamma)}{k[1+(\frac{k}{p}+\beta-\mu-1)t]^{n}(1+\delta\lambda)}z^{k}\,,\] i.e., \[ f(z)=\lambda_{p}z^{p}+ \sum_{k=p+1}^{\infty}\lambda_{k}(z^{p}- \frac{\delta(\lambda p+\alpha-\gamma)}{k[1+(\frac{k}{p}+\beta-\mu-1)t]^{n}(1+\delta\lambda)}z^{k})\,,\] implies \[ f(z)=\lambda_{p}z^{p} + \sum_{k=p+1}^{\infty}\lambda_{k}f_{k}(z)\,.\] This completes the proof.
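The 'if' direction of Theorem 7 is easy to verify numerically: any convex combination of the extremal functions \(f_k\) (together with \(f_p(z)=z^p\)) produces coefficients satisfying (15). A sketch with illustrative weights and parameters:

```python
# Theorem 7 check: a convex combination of the extremal functions f_k
# (plus f_p(z) = z^p) yields coefficients a_k = lambda_k * A_k that satisfy
# the coefficient condition (15).  All numeric choices are illustrative.
p, n = 1, 1
mu, beta, t = 0.0, 0.5, 0.4
lam, alpha, gamma, delta = 0.5, 0.8, 0.2, 0.6

bound = delta * (lam * p + alpha - gamma)

def Ak(k):
    """Coefficient of z^k in the extremal f_k of (29)."""
    M = (1 + (k / p + beta - mu - 1) * t) ** n
    return bound / (k * M * (1 + delta * lam))

weights = {2: 0.3, 3: 0.2, 5: 0.1}           # lambda_k, summing to < 1
lam_p = 1 - sum(weights.values())             # lambda_p >= 0
assert 0 <= lam_p <= 1

# f(z) = lam_p * z^p + sum_k weights[k] * f_k(z) has a_k = weights[k] * Ak(k);
# plug these into the left side of (15):
total = sum(
    k * (1 + (k / p + beta - mu - 1) * t) ** n * (1 + delta * lam) * w * Ak(k)
    for k, w in weights.items()
)
assert abs(total - bound * sum(weights.values())) < 1e-12
assert total <= bound
```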

Corollary 2. The extreme points of \(S^{n}_{p}(\lambda,\alpha,\gamma,\delta)\) are given by \begin{align*} & f_{p}(z)=z^{p}, &\\ &f_{k}(z)=z^{p} - \frac{\delta(\lambda p+\alpha-\gamma)}{k[1+(\frac{k}{p}+\beta-\mu-1)t]^{n}(1+\delta\lambda)}z^{k} , & (k\ge p+1).\end{align*}

Theorem 8. Let \(f(z)\in S^{n}_{p}(\lambda,\alpha,\gamma,\delta)\) and suppose that

\begin{align}\tag{31}\label{eq1.31} \sum_{j=p+1}^{\infty}(j-q)_{q+1}a_{j} \le \frac{\delta(\lambda p+\alpha-\gamma) \Gamma (k+1)\Gamma (2+p-l-q)}{k[1+(\frac{k}{p}+\beta-\mu-1)t]^{n}(1+\delta\lambda)\Gamma (k+1-l-q)\Gamma (p+1-q)}\,, \end{align}
for some \(0\le q \le j\), \(0\le l< 1\), where \((j-q)_{q+1}\) denotes the Pochhammer symbol defined by \((j-q)_{q+1}=(j-q)(j-q+1)\cdots j\). Also, let the function
\begin{align}\tag{32}\label{eq1.32} f_{k}(z)=z^{p}-\frac{\delta(\lambda p+\alpha-\gamma)}{k[1+(\frac{k}{p}+\beta-\mu-1)t]^{n}(1+\delta\lambda)}z^{k},& & (k\ge p+1)\,. \end{align}
If there exists an analytic function \(w(z)\) defined by
\begin{align}\tag{33}\label{eq1.33} (w(z))^{k-p}=\frac{k[1+(\frac{k}{p}+\beta-\mu-1)t]^{n}(1+\delta\lambda)}{\delta(\lambda p+\alpha-\gamma)}\frac{\Gamma (k+1-l-q)}{\Gamma (k+1)}\sum_{j=p+1}^{\infty}(j-q)_{q+1}\Psi (j) a_{j}z^{j-p}\,, \end{align}
with \((k\ge q)\) \begin{align*}\Psi (j) =\frac{\Gamma (j-q)}{\Gamma (j+1-l-q)},& & (0\le l< 1,j\ge p+1),\end{align*} then, for \(\sigma >0\) and \(z=re^{i\theta}\), \((0 < r< 1)\),
\begin{align}\tag{34}\label{eq1.34} \int_{0}^{2\pi}\left|D_{z}^{q+l}f(z)\right|^{\sigma}d\theta \le \int_{0}^{2\pi}\left|D_{z}^{q+l}f_{k}(z)\right|^{\sigma}d\theta\,. \end{align}

Proof Let \(f(z)=z^{p}-\sum_{j=p+1}^{\infty}a_{j}z^{j}.\) By means of (\ref{eq1.14}) and Definition 6, we have

\begin{align} D_{z}^{q+l}f(z)& =\frac{\Gamma (p+1)z^{p-l-q}}{\Gamma (p+1-l-q)}-\sum_{j=p+1}^{\infty}\frac{\Gamma (j+1)}{\Gamma (j+1-l-q)}a_{j}z^{j-l-q}\notag\\ &= \frac{\Gamma (p+1)z^{p-l-q}}{\Gamma (p+1-l-q)}[1-\sum_{j=p+1}^{\infty}\frac{\Gamma (j+1)\Gamma (p+1-l-q)}{\Gamma (p+1)\Gamma (j+1-l-q)}a_{j}z^{j-p}]\tag{35}\\ \label{eq1.35} &= \frac{\Gamma (p+1)z^{p-l-q}}{\Gamma (p+1-l-q)}[1-\sum_{j=p+1}^{\infty}\frac{\Gamma (p+1-l-q)}{\Gamma (p+1) }(j-q)_{q+1}\Psi(j)a_{j}z^{j-p}]\tag{36}\,, \end{align}
where \( \Psi(j)=\frac{\Gamma (j-q)}{\Gamma (j+1-l-q)},\;\;\;\;(0\le l< 1, j\ge p+1).\) Since \(\Psi\) is a decreasing function of \(j\), we get \[ 0< \Psi(j) \le \Psi(p+1)=\frac{\Gamma (p+1-q)}{\Gamma (2+p-l-q)}\,.\] Similarly, from (\ref{eq1.32}), (\ref{eq1.14}), and Definition 6, we have
\begin{align} D_{z}^{q+l}f_{k}(z)&=\frac{\Gamma (p+1)z^{p-l-q}}{\Gamma (p+1-l-q)}-\frac{\delta(\lambda p+\alpha-\gamma)\Gamma (k+1)}{k[1+(\frac{k}{p}+\beta-\mu-1)t]^{n}(1+\delta\lambda)\Gamma (k+1-l-q)}z^{k-l-q}\notag\\ \label{eq1.36} &=\frac{\Gamma (p+1)z^{p-l-q}}{\Gamma (p+1-l-q)}[1-\frac{\delta(\lambda p+\alpha-\gamma)\Gamma(k+1)\Gamma (p+1-l-q)}{k[1+(\frac{k}{p}+\beta-\mu-1)t]^{n}(1+\delta\lambda)\Gamma (p+1)\Gamma (k+1-l-q)}z^{k-p}]\,\tag{37}. \end{align}
For \(\sigma>0\) and \(z=re^{i\theta}\), \((0 < r< 1)\), we show that
\begin{align}\label{eq1.37} \int_{0}^{2\pi}&\left|1-\sum_{j=p+1}^{\infty}\frac{\Gamma (p+1-l-q)}{\Gamma (p+1)}(j-q)_{q+1}\Psi(j)a_{j}z^{j-p}\right|^{\sigma}d\theta\notag\\ &\le \int_{0}^{2\pi}\left|1-\frac{\delta(\lambda p+\alpha-\gamma)\Gamma (k+1)\Gamma (p+1-l-q)}{k[1+(\frac{k}{p}+\beta-\mu-1)t]^{n}(1+\delta\lambda)\Gamma (p+1)\Gamma (k+1-l-q)}z^{k-p}\right|^{\sigma}d\theta\,,\tag{38} \end{align}
so, by applying Lemma 1, it is enough to show that
\begin{align}\label{eq1.38} 1-&\sum_{j=p+1}^{\infty}\frac{\Gamma (p+1-l-q)}{\Gamma (p+1)}(j-q)_{q+1}\Psi(j)a_{j}z^{j-p} \notag\\ &\prec 1-\frac{\delta(\lambda p+\alpha-\gamma)\Gamma (k+1)\Gamma (p+1-l-q)}{k[1+(\frac{k}{p}+\beta-\mu-1)t]^{n}(1+\delta\lambda)\Gamma (p+1)\Gamma (k+1-l-q)}z^{k-p}\,\tag{39}. \end{align}
If the above subordination holds true, then we have an analytic function \(w(z)\) with \(w(0)=0\), \(|w(z)|< 1\), such that
\begin{align}\label{eq1.39} 1-&\sum_{j=p+1}^{\infty}\frac{\Gamma (p+1-l-q)}{\Gamma (p+1)}(j-q)_{q+1}\Psi(j)a_{j}z^{j-p} \notag\\ &= 1-\frac{\delta(\lambda p+\alpha-\gamma)\Gamma (k+1)\Gamma (p+1-l-q)}{k[1+(\frac{k}{p}+\beta-\mu-1)t]^{n}(1+\delta\lambda)\Gamma (p+1)\Gamma (k+1-l-q)}(w(z))^{k-p}\,\tag{40}. \end{align}
By the condition of the Theorem, we define the function \(w(z)\) by
\begin{align}\tag{41}\label{eq1.40} (w(z))^{k-p}=\frac{k[1+(\frac{k}{p}+\beta-\mu-1)t]^{n}(1+\delta\lambda)}{\delta(\lambda p+\alpha-\gamma)}\frac{\Gamma(k+1-l-q)}{\Gamma (k+1)}\sum_{j=p+1}^{\infty}(j-q)_{q+1}\Psi(j)a_{j}z^{j-p}\,, \end{align}
which readily yields \(w(0)=0\). For such a function \(w(z)\), we have
\begin{align}\tag{42} \notag\left|(w(z))\right|^{k-p}&\le \frac{k[1+(\frac{k}{p}+\beta-\mu-1)t]^{n}(1+\delta\lambda)}{\delta(\lambda p+\alpha-\gamma)}\frac{\Gamma(k+1-l-q)}{\Gamma (k+1)}\sum_{j=p+1}^{\infty}(j-q)_{q+1}\Psi(j)a_{j}\left| z\right| ^{j-p} \\ \notag&\le \left| z\right| \frac{k[1+(\frac{k}{p}+\beta-\mu-1)t]^{n}(1+\delta\lambda)}{\delta(\lambda p+\alpha-\gamma)}\frac{\Gamma(k+1-l-q)}{\Gamma (k+1)}\Psi(p+1)\sum_{j=p+1}^{\infty}(j-q)_{q+1}a_{j}\\ \notag&= \left| z\right| \frac{k[1+(\frac{k}{p}+\beta-\mu-1)t]^{n}(1+\delta\lambda)}{\delta(\lambda p+\alpha-\gamma)}\frac{\Gamma(k+1-l-q)\Gamma(p+1-q)}{\Gamma (k+1)\Gamma(2+p-l-q)}\sum_{j=p+1}^{\infty}(j-q)_{q+1}a_{j}\\ \label{eq1.41} &\le \left|z\right|< 1. \end{align}
By means of the hypothesis of the theorem, the result is proved.

As a special case, taking \(q=0\) in Theorem 8, we have the following result.

Corollary 3. Let \(f(z)\in S^{n}_{p}(\lambda,\alpha,\gamma,\delta)\) and suppose that

\begin{align}\tag{43}\label{eq1.42} \sum_{j=p+1}^{\infty}ja_{j} \le \frac{\delta(\lambda p+\alpha-\gamma)\Gamma (k+1)\Gamma (2+p-l)}{k[1+(\frac{k}{p}+\beta-\mu-1)t]^{n}(1+\delta\lambda) \Gamma (k+1-l)\Gamma (p+1)},& & (j\ge p+1)\,, \end{align}
if there exists an analytic function \(w(z)\) defined by
\begin{align}\tag{44}\label{eq1.43} (w(z))^{k-p}=\frac{k[1+(\frac{k}{p}+\beta-\mu-1)t]^{n}(1+\delta\lambda)}{\delta(\lambda p+\alpha-\gamma)}\frac{\Gamma (k+1-l)}{\Gamma (k+1)}\sum_{j=p+1}^{\infty}j\,\psi(j) a_{j}z^{j-p}\,, \end{align}
with \begin{align*}\psi(j)=\frac{\Gamma(j)}{\Gamma(j+1-l)},& & (0\le l< 1, j\ge p+1)\,,\end{align*} then, for \(\sigma >0\) and \(z=re^{i\theta}\),\;\;\; \((0 < r < 1)\)
\begin{align}\tag{45}\label{eq1.44} \int_{0}^{2\pi}\left|D_{z}^{l}f(z)\right|^{\sigma}d\theta \le \int_{0}^{2\pi}\left|D_{z}^{l}f_{k}(z)\right|^{\sigma}d\theta\,. \end{align}
Letting \(q=1\), we have the following from Theorem 8.

Corollary 4. Let \(f(z)\in S^{n}_{p}(\lambda,\alpha,\gamma,\delta)\) and suppose that

\begin{align}\tag{46}\label{eq1.45} \sum_{j=p+1}^{\infty}j(j-1)a_{j} \le \frac{\delta(\lambda p+\alpha-\gamma)\Gamma (k+1)\Gamma (p+1-l)}{k[1+(\frac{k}{p}+\beta-\mu-1)t]^{n}(1+\delta\lambda)\Gamma (k-l)\Gamma (p)}, & &(j\ge p+1)\,, \end{align}
if there exists an analytic function \(w(z)\) defined by
\begin{align}\tag{47}\label{eq1.46} (w(z))^{k-p}=\frac{k[1+(\frac{k}{p}+\beta-\mu-1)t]^{n}(1+\delta\lambda)}{\delta(\lambda p+\alpha-\gamma)}\frac{\Gamma (k-l)}{\Gamma (k+1)}\sum_{j=p+1}^{\infty}j(j-1)\,\psi(j) a_{j}z^{j-p} \,,\end{align}
with \begin{align*} \psi(j)=\frac{\Gamma(j-1)}{\Gamma(j-l)},& & (0\le l< 1, j\ge p+1)\,,\end{align*} then, for \(\sigma >0\) and \(z=re^{i\theta}\), \((0 < r < 1)\)
\begin{align}\tag{48}\label{eq1.47} \int_{0}^{2\pi}\left|D_{z}^{1+l}f(z)\right|^{\sigma}d\theta \le \int_{0}^{2\pi}\left|D_{z}^{1+l}f_{k}(z)\right|^{\sigma}d\theta,& & (0\le l < 1). \end{align}

Theorem 9. If \(f(z)\in S^{n}_{p}(\lambda,\alpha,\gamma,\delta)\), then we have \[\left|D_{z}^{-l}f(z)\right|\le\frac{\Gamma(p+1)}{\Gamma(p+l+1)}\left|z\right|^{p+l}\left[1+\frac{\delta(\lambda p+\alpha-\gamma)p^{n}}{[p+(p(\beta-\mu)+1)t]^{n}(1+\delta \lambda)(p+l+1)}\left|z\right|\right]\,,\] and

\begin{align}\tag{49}\label{eq1.48} \left|D_{z}^{-l}f(z)\right|\ge\frac{\Gamma(p+1)}{\Gamma(p+l+1)}\left|z\right|^{p+l}\left[1-\frac{\delta(\lambda p+\alpha-\gamma)p^{n}}{[p+(p(\beta-\mu)+1)t]^{n}(1+\delta \lambda)(p+l+1)}\left|z\right|\right]\,. \end{align}

Proof Suppose that \(f(z)\in S^{n}_{p}(\lambda,\alpha,\gamma,\delta)\). Using Theorem 1, we find that \[ \sum_{k=p+1}^{\infty}k[1+(\frac{k}{p}+\beta-\mu-1)t]^{n}(1+\delta\lambda)a_k \le \delta(\lambda p+\alpha-\gamma)\,, \] which implies

\begin{align}\tag{50}\label{eq1.49} \frac{(p+1)[p+(p(\beta-\mu)+1)t]^{n}(1+\delta\lambda)}{p^{n}}\sum_{k=p+1}^{\infty}a_k \le \delta(\lambda p+\alpha-\gamma)\,, \end{align}
i.e.,
\begin{align}\tag{51}\label{eq1.50} \sum_{k=p+1}^{\infty}a_k \le \frac{\delta(\lambda p+\alpha-\gamma)p^{n}}{(p+1)[p+(p(\beta-\mu)+1)t]^{n}(1+\delta\lambda)}\,. \end{align}
From (\ref{eq1.7}), if \(f(z)=z^{p}-\sum\limits_{k=p+1}^{\infty}a_{k}z^{k}\), then \[D_{z}^{-l}f(z)=\frac{\Gamma(p+1)}{\Gamma(p+l+1)}z^{p+l}-\sum_{k=p+1}^{\infty}\frac{\Gamma(k+1)}{\Gamma(k+l+1)}a_{k}z^{k+l}\,,\] which implies
\begin{align}\tag{52}\label{eq1.51} \frac {\Gamma(p+l+1)}{\Gamma(p+1)}z^{-l}D_{z}^{-l}f(z)=z^{p}-\sum_{k=p+1}^{\infty}\frac{\Gamma(k+1)\Gamma(p+l+1)}{\Gamma(p+1)\Gamma(k+l+1)}a_{k}z^{k}= z^{p}-\sum_{k=p+1}^{\infty}\Psi(k)a_{k}z^{k}\,, \end{align}
where
\begin{align}\tag{53}\label{eq1.52} \Psi(k)=\frac{\Gamma(k+1)\Gamma(p+l+1)}{\Gamma(p+1)\Gamma(k+l+1)}\,. \end{align}
Clearly, \(\Psi\) is a decreasing function of \(k\) and we get \[0< \Psi(k)\le\Psi(p+1)=\frac{p+1}{p+l+1}\,.\] Using (\ref{eq1.50}) and (\ref{eq1.52}), we obtain \begin{align*} \left|\frac{\Gamma(p+l+1)}{\Gamma(p+1)}z^{-l}D_{z}^{-l}f(z)\right|&\le\left|z\right|^{p}+\Psi(p+1)\left|z\right|^{p+1}\sum_{k=p+1}^{\infty}a_{k}\\ &\le \left|z\right|^{p}+\frac{\delta(\lambda p+\alpha-\gamma)p^{n}}{[p+(p(\beta-\mu)+1)t]^{n}(1+\delta \lambda)(p+l+1)}\left|z\right|^{p+1}\,,\end{align*} which is equivalent to assertion (\ref{eq1.48}) and \begin{align*} \left|\frac{\Gamma(p+l+1)}{\Gamma(p+1)}z^{-l}D_{z}^{-l}f(z)\right|&\ge\left|z\right|^{p}-\Psi(p+1)\left|z\right|^{p+1}\sum_{k=p+1}^{\infty}a_{k}\\ &\ge \left|z\right|^{p}-\frac{\delta(\lambda p+\alpha-\gamma)p^{n}}{[p+(p(\beta-\mu)+1)t]^{n}(1+\delta \lambda)(p+l+1)}\left|z\right|^{p+1}\,,\end{align*} which completes the proof.
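As a quick sanity check on the monotonicity claim, the sketch below (with hypothetical sample values \(p=2\), \(l=0.5\), not taken from the paper) evaluates \(\Psi(k)=\frac{\Gamma(k+1)\Gamma(p+l+1)}{\Gamma(p+1)\Gamma(k+l+1)}\) numerically and confirms that it decreases in \(k\) with \(\Psi(p+1)=\frac{p+1}{p+l+1}\).

```python
from math import gamma

# Psi(k) = Gamma(k+1)Gamma(p+l+1) / (Gamma(p+1)Gamma(k+l+1)) from (53);
# the sample parameters p = 2 and l = 0.5 (with 0 <= l < 1) are
# illustrative choices only.
p, l = 2, 0.5

def Psi(k):
    return gamma(k + 1) * gamma(p + l + 1) / (gamma(p + 1) * gamma(k + l + 1))

vals = [Psi(k) for k in range(p + 1, p + 40)]
assert all(x > y for x, y in zip(vals, vals[1:]))        # decreasing in k
assert abs(Psi(p + 1) - (p + 1) / (p + l + 1)) < 1e-12   # Psi(p+1) = (p+1)/(p+l+1)
assert vals[-1] > 0                                      # stays positive
```

The ratio \(\Psi(k+1)/\Psi(k)=(k+1)/(k+l+1)<1\) for \(l>0\), which is exactly what the numeric check observes.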

Theorem 10. If \(f(z)\in S^{n}_{p}(\lambda,\alpha,\gamma,\delta)\), then we have \[\left|D_{z}^{l}f(z)\right|\le\frac{\Gamma(p+1)}{\Gamma(p-l+1)}\left|z\right|^{p-l}\left[1+\frac{\delta(\lambda p+\alpha-\gamma)p^{n}}{[p+(p(\beta-\mu)+1)t]^{n}(1+\delta \lambda)(p-l+1)}\left|z\right|\right]\,,\] and

\begin{align}\tag{54}\label{eq1.53} \left|D_{z}^{l}f(z)\right|\ge\frac{\Gamma(p+1)}{\Gamma(p-l+1)}\left|z\right|^{p-l}\left[1-\frac{\delta(\lambda p+\alpha-\gamma)p^{n}}{[p+(p(\beta-\mu)+1)t]^{n}(1+\delta \lambda)(p-l+1)}\left|z\right|\right]\,. \end{align}

Proof If \(f(z)=z^{p}-\sum\limits_{k=p+1}^{\infty}a_{k}z^{k}\), then \[D_{z}^{l}f(z)=\frac{\Gamma(p+1)}{\Gamma(p-l+1)}z^{p-l}-\sum_{k=p+1}^{\infty}\frac{\Gamma(k+1)}{\Gamma(k-l+1)}a_{k}z^{k-l}\,,\] which implies

\begin{align}\tag{55}\label{eq1.54} \frac {\Gamma(p-l+1)}{\Gamma(p+1)}z^{l}D_{z}^{l}f(z)=z^{p}-\sum_{k=p+1}^{\infty}\frac{\Gamma(k+1)\Gamma(p-l+1)}{\Gamma(p+1)\Gamma(k-l+1)}a_{k}z^{k}= z^{p}-\sum_{k=p+1}^{\infty}\Psi(k)a_{k}z^{k}\,, \end{align}
where
\begin{align}\tag{56}\label{eq1.55} \Psi(k)=\frac{\Gamma(p-l+1)\Gamma(k+1)}{\Gamma(p+1)\Gamma(k-l+1)}\,. \end{align}
Clearly, \(\Psi\) is a decreasing function of \(k\) and we get \[0< \Psi(k)\le\Psi(p+1)=\frac{p+1}{p-l+1}\,.\] Using (\ref{eq1.50}) and (\ref{eq1.55}), we obtain \begin{align*} \left|\frac{\Gamma(p-l+1)}{\Gamma(p+1)}z^{l}D_{z}^{l}f(z)\right|&\le\left|z\right|^{p}+\Psi(p+1)\left|z\right|^{p+1}\sum_{k=p+1}^{\infty}a_{k}\\ &\le \left|z\right|^{p}+\frac{\delta(\lambda p+\alpha-\gamma)p^{n}}{[p+(p(\beta-\mu)+1)t]^{n}(1+\delta \lambda)(p-l+1)}\left|z\right|^{p+1}\,,\end{align*} which is equivalent to assertion (\ref{eq1.53}). Similarly, \begin{align*} \left|\frac{\Gamma(p-l+1)}{\Gamma(p+1)}z^{l}D_{z}^{l}f(z)\right|&\ge\left|z\right|^{p}-\Psi(p+1)\left|z\right|^{p+1}\sum_{k=p+1}^{\infty}a_{k}\\ &\ge \left|z\right|^{p}-\frac{\delta(\lambda p+\alpha-\gamma)p^{n}}{[p+(p(\beta-\mu)+1)t]^{n}(1+\delta \lambda)(p-l+1)}\left|z\right|^{p+1}\,, \end{align*} which completes the proof.

Corollary 5. If \(f(z)\in S^{n}_{p}(\lambda,\alpha,\gamma,\delta)\), then we have

\begin{align}\tag{57}\label{eq1.56} \notag \left|z\right|^{p}-\frac{\delta(\lambda p+\alpha-\gamma)p^{n}}{[p+(p(\beta-\mu)+1)t]^{n}(1+\delta \lambda)(p+1)}\left|z\right|^{p+1}&\le\left|f(z)\right|\\ &\le \left|z\right|^{p}+\frac{\delta(\lambda p+\alpha-\gamma)p^{n}}{[p+(p(\beta-\mu)+1)t]^{n}(1+\delta \lambda)(p+1)}\left|z\right|^{p+1}\,. \end{align}

Proof From Definition 4, we have \[\lim_{l\to 0} D_{z}^{-l}f(z)=f(z)\,.\] Therefore, setting \(l=0\) in (\ref{eq1.48}), we obtain \[\left|D_{z}^{0}f(z)\right|\le\frac{\Gamma(p+1)}{\Gamma(p+0+1)}\left|z\right|^{p+0}\left[1+\frac{\delta(\lambda p+\alpha-\gamma)p^{n}}{[p+(p(\beta-\mu)+1)t]^{n}(1+\delta \lambda)(p+0+1)}\left|z\right|\right]\,,\] and \begin{align*}%\label{eq1.57} \left|D_{z}^{0}f(z)\right|\ge\frac{\Gamma(p+1)}{\Gamma(p+0+1)}\left|z\right|^{p+0}\left[1-\frac{\delta(\lambda p+\alpha-\gamma)p^{n}}{[p+(p(\beta-\mu)+1)t]^{n}(1+\delta \lambda)(p+0+1)}\left|z\right|\right]\,, \end{align*} i.e., \begin{align*}%\label{eq1.58} \left|z\right|^{p}-\frac{\delta(\lambda p+\alpha-\gamma)p^{n}}{[p+(p(\beta-\mu)+1)t]^{n}(1+\delta \lambda)(p+1)}\left|z\right|^{p+1}&\le\left|f(z)\right|\notag\\ &\le \left|z\right|^{p}+\frac{\delta(\lambda p+\alpha-\gamma)p^{n}}{[p+(p(\beta-\mu)+1)t]^{n}(1+\delta \lambda)(p+1)}\left|z\right|^{p+1}\,, \end{align*} which is (57).

Corollary 6. If \(f(z)\in S^{n}_{p}(\lambda,\alpha,\gamma,\delta)\), then we have

\begin{align}\tag{58}p\left|z\right|^{p-1}-\frac{\delta(\lambda p+\alpha-\gamma)p^{n}}{[p+(p(\beta-\mu)+1)t]^{n}(1+\delta \lambda)}\left|z\right|^{p}\le\left|f'(z)\right| \label{eq1.59} \le p\left|z\right|^{p-1}+\frac{\delta(\lambda p+\alpha-\gamma)p^{n}}{[p+(p(\beta-\mu)+1)t]^{n}(1+\delta \lambda)}\left|z\right|^{p}\,. \end{align}

Proof From Definition 4, we have \[\lim_{l\to 1} D_{z}^{l}f(z)=f'(z)\,.\] Therefore, setting \(l=1\) in (\ref{eq1.53}), we obtain \[\left|D_{z}^{1}f(z)\right|\le\frac{\Gamma(p+1)}{\Gamma(p)}\left|z\right|^{p-1}\left[1+\frac{\delta(\lambda p+\alpha-\gamma)p^{n}}{[p+(p(\beta-\mu)+1)t]^{n}(1+\delta \lambda)(p)}\left|z\right|\right]\,,\] and \begin{align*} \left|D_{z}^{1}f(z)\right|\ge\frac{\Gamma(p+1)}{\Gamma(p)}\left|z\right|^{p-1}\left[1-\frac{\delta(\lambda p+\alpha-\gamma)p^{n}}{[p+(p(\beta-\mu)+1)t]^{n}(1+\delta \lambda)(p)}\left|z\right|\right]\,, \end{align*} i.e., \[ p\left|z\right|^{p-1}-\frac{\delta(\lambda p+\alpha-\gamma)p^{n}}{[p+(p(\beta-\mu)+1)t]^{n}(1+\delta \lambda)}\left|z\right|^{p}\le\left|f'(z)\right| \le p\left|z\right|^{p-1}+\frac{\delta(\lambda p+\alpha-\gamma)p^{n}}{[p+(p(\beta-\mu)+1)t]^{n}(1+\delta \lambda)}\left|z\right|^{p}\,,\] which is (\ref{eq1.59}).

Acknowledgments :

The authors acknowledge the management of the University of Ilorin for providing us with a suitable research laboratory and library to enable us to carry out this research.

Conflicts of Interest:

''The authors declare no conflict of interest.''

Data Availability:

All data required for this research is included within this paper.

Funding Information:

No funding is available for this research.

References

  1. Timothy, O. O. (2017). On a subclass of Univalent Functions defined by a Generalized Differential operator. International Journal of Mathematical Analysis, 11(18), 869-876.[Google Scholor]
  2. Sălăgean, G. S. (1981). Subclasses of Univalent Functions, Complex Analysis-Fifth Romanian-Finnish Seminar, Part 1 (Bucharest, 1981). Lecture Notes in Math, 362-372.[Google Scholor]
  3. Al-Oboudi, F. M. (2004). On univalent functions defined by a generalized Salagean operator. International Journal of Mathematics and Mathematical Sciences, 2004(27), 1429-1436.[Google Scholor]
  4. Bulut, S. (2010). On a class of analytic and multivalent functions with negative coefficients defined by Al-Oboudi differential operator. Studia Universitatis Babes-Bolyai Mathematica, 55(4), 115-130.[Google Scholor]
  5. Owa, S. (1978). On the distortion theorems I. Kyungpook Mathematical Journal, 18(1), 53-59.[Google Scholor]
  6. Srivastava, H. M., & Owa, S. (Eds.). (1989). Univalent Functions, Fractional Calculus, and their Applications. Chichester: Ellis Horwood; New York; Toronto: Halsted Press.[Google Scholor]
  7. Littlewood, J. E. (1925). On inequalities in the theory of functions. Proceedings of the London Mathematical Society, 2(1), 481-519.[Google Scholor]
  8. Elumalai, M. (2016). On some coefficient estimates for certain subclass of analytic and multivalent functions. IOSR Journal of Maths, 12(6), 58-65.[Google Scholor]
  9. Eker, S. S., & Seker, B. (2007). On a class of multivalent functions defined by Salagean operator. General Mathematics, 15(2-3), 154-163.[Google Scholor]
  10. Deniz, E. (2012). On \(p\)-valently close-to-convex, starlike and convex functions. Hacettepe Journal of Mathematics and Statistics, 41(5), 635-642.[Google Scholor]
  11. Kareem Oleiwi, A., M Abdulkadhim, M., & J Alhamadany, F. (2017). Subclass of multivalent functions defined by using differential operator. Journal of Kerbala University, 13(1), 11-20.[Google Scholor]
  12. Mahzoon, H., & Latha, S. (2010). On certain classes of \(p\)-valent functions defined by salagean. General Mathematics, 18(4),53-60.[Google Scholor]
  13. Mahzoon, H. (2012). New subclasses of multivalent functions defined by differential subordination. Applied Mathematical Sciences, 6(95), 1501-1507.[Google Scholor]
  14. Mostafa, A. O., & Aouf, M. K. (2009). Neighborhoods of certain \(p\)-valent analytic functions with complex order. Computers & Mathematics with Applications, 58(6), 1183-1189.[Google Scholor]
  15. Salim, T. O. (2011). Certain classes of multivalent functions defined by a fractional differential operator. General Mathematics, 19(3), 75-84.[Google Scholor]
  16. Akbulut, S., Kadioglu, E., & Özdemir, M. (2004). On the subclass of \(p\)-valently functions. Applied Mathematics and Computation, 147(1), 89-96.[Google Scholor]
  17. Thirucheran, D. M., & Stalin, T. (2018A). New subclass of multivalent functions Defined by Al-Oboudi Differential operator. International Journal of Pure and Applied Mathematics, 119(16), 661-669.[Google Scholor]
  18. Thirucheran, D. M., & Stalin, T. (2018). On a New Subclass of Multivalent Functions Defined by Al-Oboudi Differential Operator. Global Journal of Pure and Applied Mathematics, 14(5), 733-741.[Google Scholor]
]]>
Results of semigroup of linear operators generating a nonlinear Schrödinger equation https://old.pisrt.org/psr-press/journals/oma-vol-6-issue-2-2022/results-of-semigroup-of-linear-operators-generating-a-nonlinear-schrodinger-equation/ Fri, 30 Dec 2022 17:30:05 +0000 https://old.pisrt.org/?p=6932
OMA-Vol. 6 (2022), Issue 2, pp. 29 - 34 Open Access Full-Text PDF
J. B. Omosowon, A. Y. Akinyele and F. Y. Aderibigbe
Abstract:In this paper, we present results of \(\omega\)-order preserving partial contraction mapping generating a nonlinear Schrödinger equation. We used the theory of semigroups to generate a nonlinear Schrödinger equation by considering a simple application of Lipschitz perturbation of linear evolution equations. We considered the space \(L^2(\mathbb{R}^2)\) and defined the linear operator \(A_0\) by \(D(A_0)=H^2(\mathbb{R}^2)\) and \(A_0u=-i\Delta u\) for \(u\in D(A_0)\); for the initial value problem, we established that \(A_0\) is the infinitesimal generator of a \(C_0\)-semigroup of unitary operators \(T(t)\), \(-\infty<t<\infty\), on \(L^2(\mathbb{R}^2)\). ]]>

Open Journal of Mathematical Analysis

Results of semigroup of linear operators generating a nonlinear Schrödinger equation

J. B. Omosowon\(^{1}\), A. Y. Akinyele\(^{1,*}\) and F. Y. Aderibigbe\(^1\)
\(^1\) Department of Mathematics, University of Ilorin, Ilorin, Nigeria.
Correspondence should be addressed to A. Y. Akinyele at olaakinyele04@gmail.com

Abstract

In this paper, we present results of \(\omega\)-order preserving partial contraction mapping generating a nonlinear Schrödinger equation. We used the theory of semigroups to generate a nonlinear Schrödinger equation by considering a simple application of Lipschitz perturbation of linear evolution equations. We considered the space \(L^2(\mathbb{R}^2)\) and defined the linear operator \(A_0\) by \(D(A_0)=H^2(\mathbb{R}^2)\) and \(A_0u=-i\Delta u\) for \(u\in D(A_0)\); for the initial value problem, we established that \(A_0\) is the infinitesimal generator of a \(C_0\)-semigroup of unitary operators \(T(t)\), \(-\infty<t<\infty\), on \(L^2(\mathbb{R}^2)\).

Keywords:

\(\omega\)-\(OCP_n\); Evolution equation; \(C_0\)-semigroup; Schrödinger equation.

1. Introduction

Consider the initial value problem for the following nonlinear Schrödinger equation in \(\mathbb{R}^2\)

\begin{equation} \left\{ \begin{array}{ll} \frac{1}{i}\frac{\partial u}{\partial t}-\Delta u +k|u|^2u=0, & \text{in}\ (0,\infty)\times\mathbb{R}^2,\\ u(x,0)=u_0(x) & \text{in}\ \mathbb{R}^2, \end{array} \right. \end{equation}
(1)
where \(u\) is a complex valued function and \(k\) a real constant. The space in which this problem is considered is \(L^2(\mathbb{R}^2)\). By defining the linear operator \(A_0\) by \(D(A_0)=H^2(\mathbb{R}^2)\) and \(A_0u=-i\Delta u\) for \(u\in D(A_0)\) and \(A\in\omega-OCP_n\), the initial value problem (1) can be rewritten as
\begin{equation}\label{12} \left\{ \begin{array}{lll} \frac{d u}{d t}+ A_0 u + F(u)=0, & \text{for} & t>0,\\ u(0)=u_0, & & \end{array} \right. \end{equation}
(2)
where \(F(u)=ik|u|^2u\).

It follows that the operator \(A_0\) is the infinitesimal generator of a \(C_0\)-semigroup of unitary operators \(T(t)\), \(-\infty< t< \infty\), on \(L^2(\mathbb{R}^2)\). A simple application of the Fourier transform gives the following explicit formula for \(T(t);\)

\begin{equation}\label{13} (T(t)u)(x)=\frac{1}{4\pi it}\int_{\mathbb{R}^2}\exp\left\{i\frac{|x-y|^2}{4t}\right\}u(y)dy\,. \end{equation}
(3)
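On the Fourier side, the propagator in (3) acts as multiplication by the unit-modulus symbol \(e^{-i|\xi|^{2}t}\), which is why \(T(t)\) is unitary on \(L^2(\mathbb{R}^2)\). The discretized sketch below (the grid size and the value of \(t\) are arbitrary illustrative choices, not from the paper) checks norm preservation numerically:

```python
import numpy as np

# Free Schrodinger evolution via FFT: multiply the 2-D Fourier transform
# by the unit-modulus phase exp(-i|xi|^2 t); Parseval's identity then
# gives ||T(t)u||_2 = ||u||_2, i.e., unitarity of T(t).
n, t = 64, 0.7                                   # illustrative choices
rng = np.random.default_rng(0)
u = rng.standard_normal((n, n)) + 1j * rng.standard_normal((n, n))

k = 2 * np.pi * np.fft.fftfreq(n)                # discrete frequencies
KX, KY = np.meshgrid(k, k, indexing="ij")
phase = np.exp(-1j * (KX**2 + KY**2) * t)        # |phase| = 1 everywhere

u_t = np.fft.ifft2(phase * np.fft.fft2(u))       # discrete analogue of T(t)u
assert np.isclose(np.linalg.norm(u_t), np.linalg.norm(u))
```

Because the multiplier has modulus one at every frequency, the discrete \(L^2\) norm is conserved to machine precision, mirroring the unitarity claim for \(T(t)\).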

Suppose \(X\) is a Banach space, \(H\) a Hilbert space, \(X_n\subseteq X\) a finite set, \(\omega-OCP_n\) the \(\omega\)-order preserving partial contraction mapping, \(M_{m}\) a matrix, \(L(X)\) a bounded linear operator on \(X\), \(P_n\) a partial transformation semigroup, \(\rho(A)\) a resolvent set, \(\sigma(A)\) a spectrum of \(A\), and \(A\) a generator of a \(C_{0}\)-semigroup. This paper consists of results of \(\omega\)-order preserving partial contraction mapping generating a nonlinear Schrödinger equation.

Akinyele et al. [1] obtained a continuous time Markov semigroup of linear operators, and in [2] established results of \(\omega\)-order reversing partial contraction mapping generating a differential operator. Balakrishnan [3] presented an operational calculus for infinitesimal generators of semigroups. Banach [4] introduced the concept of Banach spaces. Brezis and Gallouet [5] generated a nonlinear Schrödinger evolution equation. Chill and Tomilov [6] introduced some resolvent approaches to the stability of operator semigroups. Davies [7] studied linear operators and their spectra. Engel and Nagel [8] obtained one-parameter semigroups for linear evolution equations. Omosowon et al. [9] generated some analytic results of the semigroup of a linear operator with dynamic boundary conditions, and in [10] introduced dual properties of \(\omega\)-order reversing partial contraction mapping in the semigroup of a linear operator. Omosowon et al. [11] established a regular weak*-continuous semigroup of linear operators, and in [12] generated quasilinear equations of evolution on the semigroup of a linear operator. Pazy [13] presented the asymptotic behaviour of the solution of an abstract evolution equation with some applications, and in [14] obtained a class of semi-linear equations of evolution. Rauf and Akinyele [15] introduced the \(\omega\)-order preserving partial contraction mapping and obtained its properties, and in [16] Rauf et al. introduced some results on stability and spectral properties of the semigroup of a linear operator. Vrabie [17] proved some results on \(C_{0}\)-semigroups and their applications. Yosida [18] deduced some results on the differentiability and representation of one-parameter semigroups of linear operators.

2. Preliminaries

Definition 1.(\(C_0\)-Semigroup) [17] A \(C_0\)-semigroup is a strongly continuous one-parameter semigroup of bounded linear operators on a Banach space.

Definition 2. (\(\omega\)-\(OCP_n\)) [15] A transformation \(\alpha\in P_n\) is called \(\omega\)-order preserving partial contraction mapping if \(\forall x,y \in~ \)Dom\(\alpha:x\le y~~\implies~~ \alpha x\le \alpha y\) and at least one of its transformation must satisfy \(\alpha y=y\) such that \(T(t+s)=T(t)T(s)\) whenever \(t,s>0\) and otherwise for \(T(0)=I\).

Definition 3.(Evolution Equation) [13] An evolution equation is an equation that can be interpreted as the differential law of the development (evolution) in time of a system. The class of evolution equations includes, first of all, ordinary differential equations and systems of the form \begin{equation*} u^{\prime}=f(t,u),\quad u^{\prime\prime}=f(t,u,u^{\prime}), \end{equation*} etc., in the case where \(u(t)\) can be regarded naturally as the solution of the Cauchy problem; these equations describe the evolution of systems with finitely many degrees of freedom.

Definition 4. (Mild Solution) [14] A continuous solution \(u\) of the integral equation

\begin{equation}\label{21} u(t)=T(t-t_0)u_0 + \int_{t_0}^{t}T(t-s)f(s,u(s))ds \end{equation}
(4)
will be called a mild solution of the initial value problem
\begin{equation}\label{22} \left\{ \begin{array}{ll} \frac{du(t)}{dt}+Au(t)=f(t,u(t)),\ t>t_0,\\ u(t_0)=u_0, \end{array} \right. \end{equation}
(5)
if the solution is a Lipschitz continuous function.
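Mild solutions of (5) are naturally computed by Picard iteration on the integral equation (4). The sketch below is a toy scalar analogue (all choices, namely \(A=a=1\), \(f(t,u)=-u\), and \(u_0=1\), are illustrative and not from the paper); with them, (5) reduces to \(u'=-2u\) with exact solution \(u_0e^{-2t}\):

```python
import numpy as np

# Picard iteration u_{n+1}(t) = e^{-at} u0 + int_0^t e^{-a(t-s)} f(s, u_n(s)) ds
# for the scalar toy problem u' + a u = f(t, u) with f(t, u) = -u, u(0) = u0.
a, u0, T, N = 1.0, 1.0, 0.5, 400                 # illustrative parameters
t = np.linspace(0.0, T, N + 1)
dt = T / N
exact = u0 * np.exp(-2.0 * t)                    # solution of u' = -2u

u = np.full_like(t, u0)                          # start the iteration at u = u0
for _ in range(60):
    g = -u                                       # f(s, u_n(s))
    I = np.zeros_like(t)
    for i in range(1, N + 1):                    # trapezoid rule on [0, t_i]
        w = np.exp(-a * (t[i] - t[: i + 1])) * g[: i + 1]
        I[i] = 0.5 * dt * np.sum(w[1:] + w[:-1])
    u = np.exp(-a * t) * u0 + I                  # next Picard iterate

assert np.max(np.abs(u - exact)) < 1e-4          # converged to the mild solution
```

On \([0,0.5]\) the iteration map is a contraction, so the iterates converge to the unique mild solution; only quadrature error remains.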

Definition 5. (Schrödinger Equation) [19] The Schrödinger equation is a linear partial differential equation that governs the wave function of a quantum-mechanical system. It is a key result in quantum mechanics, and its discovery was a significant landmark in the development of the subject.

Example 1. \(2\times 2\) matrix \([M_m(\mathbb {R}^{n})]\): Suppose \[ A=\begin{pmatrix} 2&0\\ \Delta & 2\\ \end{pmatrix} \] and let \(T(t)=e^{t A}\), then \[ e^{t A}=\begin{pmatrix} e^{2t}& I\\e^{\Delta t} & e^{2t}\\ \end{pmatrix}. \]

Example 2. \(3\times 3\) matrix \([M_m(\mathbb{C})]\): We have for each \(\lambda>0\) such that \(\lambda\in \rho(A)\) where \(\rho(A)\) is a resolvent set on \(X\). Suppose we have \[ A=\begin{pmatrix} 2&2&I\\ 2&2&2\\ \Delta &2&2 \end{pmatrix} \] and let \(T(t)=e^{t A_\lambda}\), then \[ e^{t A_\lambda}=\begin{pmatrix} e^{2t\lambda}&e^{2t\lambda}& I\\ e^{2t\lambda}&e^{2t\lambda}&e^{2t\lambda}\\ e^{\Delta t\lambda}&e^{2t\lambda}&e^{2t\lambda}\end{pmatrix} .\]

Example 3. Let \(X=C_{ub}(\mathbb{N}\cup\{0\})\) be the space of all bounded and uniformly continuous function from \(\mathbb{N}\cup\{0\}\) to \(\mathbb{R}\), endowed with the sup-norm \(\|\cdot\|_\infty\) and let \(\{T(t); t \in \mathbb{R_{+}}\}\subseteq L(X)\) be defined by \[ [T(t)f](s)=f(t+s)\,. \] For each \(f\in X\) and each \(t,s\in \mathbb{R_+}\), one may easily verify that \(\{T(t); t \in \mathbb{R_{+}}\}\) satisfies Examples 1 and 2 above.
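For the translation semigroup of Example 3, the law \(T(t+s)=T(t)T(s)\) and \(T(0)=I\) hold pointwise by construction; the tiny sketch below (the sample function and evaluation points are arbitrary choices) makes the verification concrete:

```python
# Translation semigroup [T(t)f](x) = f(t + x) on a bounded, uniformly
# continuous sample function; t, s, x below are arbitrary test values.
def f(x):
    return 1.0 / (1.0 + x * x)

def T(t):
    return lambda g: (lambda x: g(t + x))

t, s, x = 0.3, 1.2, 2.0
lhs = T(t)(T(s)(f))(x)          # (T(t)T(s)f)(x) = f(t + s + x)
rhs = T(t + s)(f)(x)            # (T(t+s)f)(x)  = f(t + s + x)
assert abs(lhs - rhs) < 1e-12   # semigroup law T(t)T(s) = T(t+s)
assert T(0.0)(f)(x) == f(x)     # identity T(0) = I
```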

3. Main results

This section presents results on semigroups of linear operators, using \(\omega\)-\(OCP_{n}\) to generate a nonlinear Schrödinger equation:

Theorem 1. Suppose \(A:D(A)\subseteq L^2(\mathbb{R}^2)\to L^2(\mathbb{R}^2)\) is the infinitesimal generator of a semigroup \(\{T(t),\ t\geq0\}\) given by (3), where \(A\in\omega-OCP_n\). If \(2\leq p\leq \infty\) and \(\frac{1}{q}+\frac{1}{p}=1\), then \(T(t)\) can be extended in a unique way to an operator from \(L^q(\mathbb{R}^2)\) into \(L^p(\mathbb{R}^2)\) and

\begin{equation}\label{31} \|T(t)u\|_{0,p}\leq(4\pi t)^{-(\frac{2}{q}-1)}\|u\|_{0,q}. \end{equation}
(6)

Proof. Since \(T(t)\) is a unitary operator on \(L^2(\mathbb{R}^2)\) we have $$ \|T(t)u\|_{0,2}=\|u\|_{0,2}\quad for\ u\in L^2(\mathbb{R}^2). $$ On the other hand it is clear from (3) that \(T(t):L^1(\mathbb{R}^2)\to L^\infty(\mathbb{R}^2)\) and that for \(t>0\), we have $$ \|T(t)u\|_{0,\infty}\leq(4\pi t)^{-1}\|u\|_{0,1}. $$ The Riesz convexity theorem implies in this situation that \(T(t)\) can be extended uniquely to an operator from \(L^q(\mathbb{R}^2)\) into \(L^p(\mathbb{R}^2)\) and that (6) holds. In order to prove the existence of a local solution of the initial value problem (2) for every \(u\in H^2(\mathbb{R}^2)\) and \(A\in\omega-OCP_n\), we note that the graph norm of the operator \(A_0\) in \(L^2(\mathbb{R}^2)\), that is the norm \( \|u\|=\|u\|_{0,2} + \|A_0u\|\), for \(u\in D(A_0)\) and \(A\in\omega-OCP_n\), is equivalent to the norm \(\|\cdot\|_{2,2}\) in \(H^2(\mathbb{R}^2)\). Therefore \(D(A_0)\) equipped with the graph norm is the space \(H^2(\mathbb{R}^2)\). Hence the proof is completed.

Theorem 2. Assume \(A:D(A)\subseteq H^2(\mathbb{R}^2)\to H^2(\mathbb{R}^2)\) is the infinitesimal generator of a \(C_0\)-semigroup \(\{T(t);\ t\geq 0\}\). The nonlinear mapping \(Fu=ik|u|^2u\) maps \(H^2(\mathbb{R}^2)\) into itself and satisfies, for \(u,v\in H^2(\mathbb{R}^2)\) and \(A\in\omega-OCP_n\),

\begin{equation}\label{32} \|F(u)\|_{2,2}\leq C\|u\|^2_{0,\infty}\|u\|_{2,2}\,, \end{equation}
(7)
\begin{equation}\label{33} \|F(u)-F(v)\|_{2,2}\leq C(\|u\|^2_{2,2}+\|v\|^2_{2,2})\|u-v\|_{2,2}\,. \end{equation}
(8)

Proof. From Sobolev's theorem in \(\mathbb{R}^2\), it follows that \(H^2(\mathbb{R}^2)\subset L^\infty(\mathbb{R}^2)\) and that there is a constant \(C\) such that

\begin{equation}\label{34} \|u\|_{0,\infty}\leq C\|u\|_{2,2}\quad for\ u\in H^2(\mathbb{R}^2)\,. \end{equation}
(9)
Denoting by \(D\) any first order differential operator we have for every \(u\in H^2(\mathbb{R}^2)\) $$ |D^2(|u|^2u)|\leq C(|u|^2|D^2u| + |u||Du|^2)\,, $$ and therefore
\begin{equation}\label{35} \||u|^2u\|_{2,2} \leq C(\|u\|^2_{0,\infty}\|u\|_{2,2} + \|u\|_{0,\infty} \|u\|^2_{1,4}). \end{equation}
(10)
From Gagliardo-Nirenberg inequalities we have
\begin{equation}\label{36} \|u\|_{1,4}\leq C\|u\|^{\frac{1}{2}}_{0,\infty}\|u\|^{\frac{1}{2}}_{2,2}. \end{equation}
(11)
Combining (10) and (11), we obtain (7). The inequality (8) is proved similarly, using the Leibniz formula for the derivatives of products together with the estimates (9) and (11), which completes the proof.

Theorem 3. Suppose \(A:D(A)\subseteq H^2(\mathbb{R}^2)\to H^2(\mathbb{R}^2)\) is the infinitesimal generator of a \(C_0\)-semigroup \(\{T(t);\ t\geq 0\}\). Let \(u_0\in H^2(\mathbb{R}^2)\), \(A\in\omega-OCP_n\) and \(u\) be the solution of initial value problem (2) on \([0,T)\). If \(K\geq 0\), then \(\|u(t)\|_{2,2}\) is bounded on \([0,T)\).

Proof. We will first show that \(\|u(t)\|_{1,2}\) is bounded on \([0,T)\). To this end we multiply the equation

\begin{equation}\label{37} \frac{1}{i}\frac{\partial u}{\partial t} -\Delta u+K|u|^2u=0\,, \end{equation}
(12)
by \(\overline{u}\) and integrate over \(\mathbb{R}^2\). Then taking the imaginary part of the result gives \(\frac{d}{dt}\|u\|^2_{0,2}=0\) and therefore,
\begin{equation}\label{38} \|u(t)\|_{0,2}=\|u_0\|_{0,2}\quad for\ 0\leq t\leq T. \end{equation}
(13)
Next we multiply (12) by \(\partial\overline{u}/\partial t\), integrate over \(\mathbb{R}^2\) and consider the real part of the result. This leads to
\begin{equation}\label{39} \frac{1}{2}\int_{\mathbb{R}^2}|\nabla u(t,x)|^2dx + \frac{K}{4}\int_{\mathbb{R}^2}|u(t,x)|^4dx=\frac{1}{2}\int_{\mathbb{R}^2}|\nabla u_0(x)|^2dx + \frac{K}{4}\int_{\mathbb{R}^2}|u_0(x)|^4dx. \end{equation}
(14)
Therefore, since \(K\geq 0\), then \(\|u\|_{1,2}\) is bounded on \([0,T)\). To prove that \(\|u(t)\|_{2,2}\) is bounded on \([0,T)\), we first note that from Sobolev's theorem it follows that \(H^1(\mathbb{R}^2)\subset L^p(\mathbb{R}^2)\) for \(p>2\) and that
\begin{equation}\label{310} \|v\|_{0,p}\leq C\|v\|_{1,2}\quad for\ v\in H^1(\mathbb{R}^2). \end{equation}
(15)
Therefore if \(u\) is the solution of (2) on \([0,T)\) it follows from the boundedness of \(\|u(t)\|_{1,2}\) on \([0,T)\) and (15) that
\begin{equation}\label{311} \|u(t)\|_{0,p}\leq C\quad for\ p>2,\ 0\leq t< T. \end{equation}
(16)
Since \(u\) is the solution of (2), it is also the solution of the integral equation
\begin{equation}\label{312} u(t)=T(t)u_0 - \int_{0}^{t}T(t-s)F(u(s))ds. \end{equation}
(17)
Denoting by \(D\) any first order derivative, we have
\begin{equation}\label{313} Du(t)=T(t)Du_0 - \int_{0}^{t}T(t-s)DF(u(s))ds. \end{equation}
(18)
We fix now \(p>2\) and let \(q=p/(p-1)\) and \(r=4p/(p-2)\). Then denoting by \(C\) a generic constant and using Theorem 1, (18) and Hölder's inequality, we find \begin{align*} \|Du(t)\|_{0,p}&\leq \|T(t)Du_0\|_{0,p} + C\int_{0}^{t}(t-s)^{1-\frac{2}{q}}\||u(s)|^2|Du(s)|\|_{0,q}ds\\ &\leq C\|u_0\|_{2,2} +C\int_{0}^{t}(t-s)^{1-\frac{2}{q}}\|u(s)\|_{0,r}\|Du(s)\|_{0,2}ds\\ &\leq C\|u_0\|_{2,2}+C\int_{0}^{t}(t-s)^{1-\frac{2}{q}}ds\leq C(t)\,, \end{align*} where in the last inequality we used the fact that \(r>2\) and therefore \(\|u(s)\|_{0,r}\leq C\) by (16) and that $$ \|Du(s)\|_{0,2}\leq C\|u(s)\|_{1,2}\leq C. $$ Therefore, \(\|u(t)\|_{1,p}\leq C\) and since by Sobolev's theorem, \(W^{1,p}(\mathbb{R}^2)\subset L^\infty(\mathbb{R}^2)\) for \(p>2\), it follows that $$ \|u(t)\|_{0,\infty}\leq C\quad for\ 0\leq t< T. $$ Finally, since \(T(t)\) is an isometry on \(L^2(\mathbb{R}^2)\) it follows from (17) that \begin{align*} \|u(t)\|_{2,2}&\leq\|T(t)u_0\|_{2,2} + \int_{0}^{t}\|T(t-s)F(u(s))\|_{2,2}ds\\ &\leq\|u_0\|_{2,2} + C\int_{0}^{t}\|u(s)\|^2_{0,\infty}\|u(s)\|_{2,2}ds\,, \end{align*} which by Gronwall's inequality implies the boundedness of \(\|u(t)\|_{2,2}\) on \([0,T)\) as desired. Hence the proof is completed.

4. Conclusion

In this paper, results of the \(\omega\)-order preserving partial contraction mapping generating a nonlinear Schrödinger equation have been established.

Acknowledgments :

The authors acknowledge the management of the University of Ilorin for providing us with a suitable research laboratory and library to enable us to carry out this research.

Conflicts of Interest:

''The authors declare no conflict of interest.''

References

  1. Akinyele, A. Y., Jimoh, O. E., Omosowon, J. B., & Bello, K. A. (2022). Results of semigroup of linear operator generating a continuous time Markov semigroup. Earthline Journal of Mathematical Sciences, 10(1), 97-108. [Google Scholor]
  2. Akinyele, A. Y., Abubakar, J. U., Bello, K. A., Alhassan, L. K., & Aasa, M. A. (2021). Results of \(\omega\)-order reversing partial contraction mapping generating a differential operator. Malaya Journal of Matematik, 9(3), 91-98. [Google Scholor]
  3. Balakrishnan, A. V. (1959). An operational calculus for infinitesimal generators of semigroups. Transactions of the American Mathematical Society, 91(2), 330-353. [Google Scholor]
  4. Banach, S. (1922). Sur les opérations dans les ensembles abstraits et leur application aux équations intégrales. Fundamenta Mathematicae, 3(1), 133-181. [Google Scholor]
  5. Brezis, H., & Gallouet, T. (1980). Nonlinear Schrödinger evolution equations. Nonlinear Analysis, 4, 677-682. [Google Scholor]
  6. Chill, R., & Tomilov, Y. (2007). Stability Operator Semigroup. Banach Center Publication 75, Polish Academy of Sciences, Warsaw, 71-73.
  7. Davies, E. B. (2007). Linear Operators and their Spectra (Vol. 106). Cambridge University Press. [Google Scholor]
  8. Engel, K., & Nagel, R. (1999). One-Parameter Semigroups for Linear Evolution Equations. Graduate Texts in Mathematics, 194, Springer. [Google Scholor]
  9. Omosowon, J. B., Akinyele, A. Y., Saka-Balogun, O. Y., & Ganiyu, M. A. (2020). Analytic results of semigroup of linear operator with dynamic boundary conditions. Asian Journal of Mathematics and Applications, 2020, Article ID ama0561, 10 pages. [Google Scholor]
  10. Omosowon, J. B., Akinyele, A. Y., & Jimoh, F. M. (2021). Dual properties of \(\omega\)-order reversing partial contraction mapping in semigroup of linear operator. Asian Journal of Mathematics and Applications, 2021, Article ID ama0566, 10 pages. [Google Scholor]
  11. Omosowon, J. B., Akinyele, A. Y., Bello, K. A., & Ahmed, B. M. (2022). Results of semigroup of linear operators generating a regular weak*-continuous semigroup. Earthline Journal of Mathematical Sciences, 10(2), 289-304. [Google Scholor]
  12. Omosowon, J. B., Akinyele, A. Y., Ahmed, B. M., & Saka-Balogun, O. Y. (2022). Results of semigroup of linear operator generating a quasilinear equations of evolution. Earthline Journal of Mathematical Sciences, 10(2), 409-421. [Google Scholor]
  13. Pazy, A. (1968). Asymptotic behavior of the solution of an abstract evolution equation and some applications. Journal of Differential Equations, 4(4), 493-509. [Google Scholor]
  14. Pazy, A. (1975). A class of semi-linear equations of evolution. Israel Journal of Mathematics, 20(1), 23-36. [Google Scholor]
  15. Rauf, K., & Akinyele, A. Y. (2019). Properties of \(\omega\)-order-preserving partial contraction mapping and its relation to \(C_0\)-semigroup. International Journal of Mathematics and Computer Science, 14(1), 61-68. [Google Scholor]
  16. Rauf, K., Akinyele, A. Y., Etuk, M. O., Zubair, R. O., & Aasa, M. A. (2019). Some result of stability and spectra properties on semigroup of linear operator. Advances in Pure Mathematics, 9(01), 43- 51. [Google Scholor]
  17. Vrabie, I. I. (2003). \(C_0\)-Semigroup and Application. Mathematics Studies, 191, Elsevier, North-Holland. [Google Scholor]
  18. Yosida, K. (1948). On the differentiability and the representation of one-parameter semi-group of linear operators. Journal of the Mathematical Society of Japan, 1(1), 15-21. [Google Scholor]
  19. Wikipedia. Schrödinger Equation. [Google Scholor]
]]>
On generalized Tetranacci numbers: Closed forms of the sum formulas \(\sum\limits_{k=0}^{n}kx^{k}W_{k}\) and \(\sum\limits_{k=1}^{n}kx^{k}W_{-k}\) https://old.pisrt.org/psr-press/journals/oma-vol-6-issue-2-2022/on-generalized-tetranacci-numbers-closed-forms-of-the-sum-formulas-sumlimits_k0nkxkw_k-and-sumlimits_k1nkxkw_-k/ Fri, 30 Dec 2022 17:14:39 +0000 https://old.pisrt.org/?p=6930
OMA-Vol. 6 (2022), Issue 2, pp. 1 - 28 Open Access Full-Text PDF
Yüksel Soykan, Erkan Taşdemir and Inci Okumuş
Abstract:In this paper, closed forms of the sum formulas \(\sum\limits_{k=0}^{n}kx^{k}W_{k}\) and \(\sum\limits_{k=1}^{n}kx^{k}W_{-k}\) for generalized Tetranacci numbers are presented. As special cases, we give summation formulas of Tetranacci, Tetranacci-Lucas, and other fourth-order recurrence sequences. ]]>

Open Journal of Mathematical Analysis

On generalized Tetranacci numbers: Closed forms of the sum formulas \(\sum\limits_{k=0}^{n}kx^{k}W_{k}\) and \(\sum\limits_{k=1}^{n}kx^{k}W_{-k}\)

Yüksel Soykan\(^1\), Erkan Taşdemir\(^{2,∗}\) and Inci Okumuş\(^3\)
\(^1\) Department of Mathematics, Art and Science Faculty, Zonguldak Bülent Ecevit University, 67100, Zonguldak, Turkey.
\(^2\) Pınarhisar Vocational School, Kırklareli University, 39300, Kırklareli, Turkey.
\(^3\) Department of Engineering Sciences, Istanbul University-Cerrahpaşa, 34100, Istanbul, Turkey.
Correspondence should be addressed to Erkan Taşdemir at erkantasdemir@hotmail.com

Abstract

In this paper, closed forms of the sum formulas \(\sum\limits_{k=0}^{n}kx^{k}W_{k}\) and \(\sum\limits_{k=1}^{n}kx^{k}W_{-k}\) for generalized Tetranacci numbers are presented. As special cases, we give summation formulas of Tetranacci, Tetranacci-Lucas, and other fourth-order recurrence sequences.

Keywords:

Tetranacci numbers; Tetranacci-Lucas numbers; fourth order Pell numbers; sum formulas; summing formulas.

1. Introduction

The literature contains many studies of recursively defined number sequences. Two such sequences are the Tetranacci and Tetranacci-Lucas sequences, which are special cases of the generalized Tetranacci numbers. A generalized Tetranacci sequence

\begin{equation*} \{W_{n}\}_{n\geq 0}=\{W_{n}(W_{0},W_{1},W_{2},W_{3};r,s,t,u)\}_{n\geq 0} \end{equation*} is defined by the fourth-order recurrence relations
\begin{equation} W_{n}=rW_{n-1}+sW_{n-2}+tW_{n-3}+uW_{n-4}, \label{equation:dcfvtsrewqsxazsae} \end{equation}
(1)
where the initial values \(W_{0},W_{1},W_{2},W_{3}\) are arbitrary complex (or real) numbers, not all zero, and \(r,s,t,u\) are complex numbers.
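The recurrence can be sketched in a few lines of Python; the function name and the use of exact rationals are our own choices, not from the paper:

```python
from fractions import Fraction

def generalized_tetranacci(w0, w1, w2, w3, r, s, t, u, n):
    """Return W_0, ..., W_n for the recurrence
    W_n = r*W_{n-1} + s*W_{n-2} + t*W_{n-3} + u*W_{n-4}."""
    w = [Fraction(w0), Fraction(w1), Fraction(w2), Fraction(w3)]
    while len(w) <= n:
        w.append(r * w[-1] + s * w[-2] + t * w[-3] + u * w[-4])
    return w[:n + 1]
```

For instance, `generalized_tetranacci(0, 1, 1, 2, 1, 1, 1, 1, 7)` reproduces the opening Tetranacci terms \(0,1,1,2,4,8,15,29\).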

This sequence has been studied by many authors, and more details can be found in the extensive literature dedicated to these sequences; see, for example, [1,2,3,4,5,6].

The sequence \(\{W_{n}\}_{n\geq 0}\) can be extended to negative subscripts by defining

\begin{equation*} W_{-n}=-\frac{t}{u}W_{-(n-1)}-\frac{s}{u}W_{-(n-2)}-\frac{r}{u}W_{-(n-3)}+ \frac{1}{u}W_{-(n-4)}\,, \end{equation*} for \(n=1,2,3,....\) Therefore, recurrence (1) holds for all integer \(n.\)
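A short Python sketch of this backward extension (helper name ours) makes the point concrete: running the recurrence in reverse produces the terms with negative subscripts, and recurrence (1) indeed continues to hold across index zero:

```python
from fractions import Fraction

def negative_terms(w0, w1, w2, w3, r, s, t, u, m):
    """Return [W_{-1}, W_{-2}, ..., W_{-m}] via
    W_{-n} = -(t/u)W_{-(n-1)} - (s/u)W_{-(n-2)} - (r/u)W_{-(n-3)} + (1/u)W_{-(n-4)}."""
    # seq runs downward: W_3, W_2, W_1, W_0, W_{-1}, W_{-2}, ...
    seq = [Fraction(w3), Fraction(w2), Fraction(w1), Fraction(w0)]
    for _ in range(m):
        seq.append(-Fraction(t, u) * seq[-1] - Fraction(s, u) * seq[-2]
                   - Fraction(r, u) * seq[-3] + Fraction(1, u) * seq[-4])
    return seq[4:]
```

For the fourth-order Jacobsthal data \(W=(0,1,1,1)\), \(r=s=t=1\), \(u=2\), this gives \(W_{-1}=-1/2\), and one can then check \(W_{3}=rW_{2}+sW_{1}+tW_{0}+uW_{-1}\) directly.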

For specific values of \(W_{0},W_{1},W_{2},W_{3}\) and \(r,s,t,u\), the resulting special Tetranacci sequences are worth collecting in a table under their own names. For example, the following names and notations (see Table 1) are used in the literature for special choices of \(r,s,t,u\) and the initial values.


Table 1. A few special cases of generalized Tetranacci sequences.
No Sequences (Numbers) \(\text{Notation}\) OEIS [7] Ref.
1 Tetranacci \(\{M_{n}\}=\{W_{n}(0,1,1,2;1,1,1,1)\}\) A000078 [8]
2 Tetranacci-Lucas \(\{R_{n}\}=\{W_{n}(4,1,3,7;1,1,1,1)\}\) A073817 [8]
3 fourth order Pell \(\{P_{n}^{(4)}\}=\{W_{n}(0,1,2,5;2,1,1,1)\}\) A103142 [9]
4 fourth order Pell-Lucas \(\{Q_{n}^{(4)}\}=\{W_{n}(4,2,6,17;2,1,1,1)\}\) A331413 [9]
5 modified fourth order Pell \(\{E_{n}^{(4)}\}=\{W_{n}(0,1,1,3;2,1,1,1)\}\) A190139 [9]
6 fourth order Jacobsthal \(\{J_{n}^{(4)}\}=\{W_{n}(0,1,1,1;1,1,1,2)\}\) A007909 [10]
7 fourth order Jacobsthal-Lucas \(\{j_{n}^{(4)}\}=\{W_{n}(2,1,5,10;1,1,1,2)\}\) A226309 [10]
8 modified fourth order Jacobsthal \(\{K_{n}^{(4)}\}=\{W_{n}(3,1,3,10;1,1,1,2)\}\) [10]
9 fourth-order Jacobsthal Perrin \(\{Q_{n}^{(4)}\}=\{W_{n}(3,0,2,8;1,1,1,2)\}\) [10]
10 adjusted fourth-order Jacobsthal \(\{S_{n}^{(4)}\}=\{W_{n}(0,1,1,2;1,1,1,2)\}\) [10]
11 modified fourth-order Jacobsthal-Lucas \(\{R_{n}^{(4)}\}=\{W_{n}(4,1,3,7;1,1,1,2)\}\) [10]
12 4-primes \(\{G_{n}\}=\{W_{n}(0,0,1,2;2,3,5,7)\}\) [11]
13 Lucas 4-primes \(\{H_{n}\}=\{W_{n}(4,2,10,41;2,3,5,7)\}\) [11]
14 modified 4-primes \(\{E_{n}\}=\{W_{n}(0,0,1,1;2,3,5,7)\}\) [11]

Here OEIS stands for the On-Line Encyclopedia of Integer Sequences. For ease of writing, from now on we drop the superscripts from the sequences; for example, we write \(J_{n}\) for \(J_{n}^{(4)}\).
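To make Table 1 concrete, here is a small Python check (the variable names and data layout are ours) that regenerates the opening terms of a few rows from their initial values and parameters:

```python
def first_terms(inits, params, n):
    """First n terms of W(inits; r,s,t,u) from Table 1 (integer data)."""
    w = list(inits)
    r, s, t, u = params
    while len(w) < n:
        w.append(r * w[-1] + s * w[-2] + t * w[-3] + u * w[-4])
    return w

# (initial values), (r, s, t, u) for three rows of Table 1
tetranacci       = first_terms((0, 1, 1, 2), (1, 1, 1, 1), 8)
tetranacci_lucas = first_terms((4, 1, 3, 7), (1, 1, 1, 1), 8)
jacobsthal_4     = first_terms((0, 1, 1, 1), (1, 1, 1, 2), 8)
```

This yields \(0,1,1,2,4,8,15,29\), then \(4,1,3,7,15,26,51,99\), then \(0,1,1,1,3,7,13,25\), respectively.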

Some works on sum formulas for these numbers are collected in the following Table 2.

Table 2. A few special studies of sum formulas.
Name of sequence Papers which deal with sum formulas
Pell and Pell-Lucas [12, 13, 14, 15, 16]
Generalized Fibonacci [17, 18, 19, 20, 21, 22, 23]
Generalized Tribonacci [24, 25, 26, 27]
Generalized Tetranacci [6, 24, 28, 29]
Generalized Pentanacci [24, 30, 31]
Generalized Hexanacci [32, 33]

The following theorem presents some linear sum formulas of generalized Tetranacci numbers with positive subscripts.

Theorem 1.[34, Theorem 1] For \(n\geq 0\) we have the following formulas:

  • (a) If \(rx+sx^{2}+tx^{3}+ux^{4}-1\neq 0 ,\) then \begin{equation*} \sum\limits_{k=0}^{n}x^{k}W_{k}=\frac{\Theta _{1}(x)}{rx+sx^{2}+tx^{3}+ux^{4}-1}. \end{equation*}
  • (b) If \( r^{2}x+2ux^{2}-s^{2}x^{2}+t^{2}x^{3}-u^{2}x^{4}+2sx+2rtx^{2}-2sux^{3}-1\neq 0 \) then \begin{equation*} \sum\limits_{k=0}^{n}x^{k}W_{2k}=\frac{\Theta _{2}(x)}{ r^{2}x+2ux^{2}-s^{2}x^{2}+t^{2}x^{3}-u^{2}x^{4}+2sx+2rtx^{2}-2sux^{3}-1}. \end{equation*}
  • (c) If \( r^{2}x+2ux^{2}-s^{2}x^{2}+t^{2}x^{3}-u^{2}x^{4}+2sx+2rtx^{2}-2sux^{3}-1\neq 0 \) then \begin{equation*} \sum\limits_{k=0}^{n}x^{k}W_{2k+1}=\frac{\Theta _{3}(x)}{ r^{2}x+2ux^{2}-s^{2}x^{2}+t^{2}x^{3}-u^{2}x^{4}+2sx+2rtx^{2}-2sux^{3}-1} \end{equation*}
where

\(\Theta _{1}(x)=x^{n+3}W_{n+3}-x^{n+2}\left( rx-1\right) W_{n+2}-x^{n+1}\left( sx^{2}+rx-1\right) W_{n+1}+ux^{n+4} W_{n}-x^{3}W_{3}+ x^{2}(rx-1)W_{2}+x(sx^{2}+rx-1)W_{1}+(tx^{3}+sx^{2}+rx-1) W_{0},\)

\(\Theta _{2}(x)=x^{n+1}\left( -ux^{2}-sx+1\right) W_{2n+2}+x^{n+2}(t+rs+rux)W_{2n+1}+ x^{n+2}(u+t^{2}x-u^{2}x^{2}+rt-sux)W_{2n}+ ux^{n+2}\left( r+tx\right) W_{2n-1} \)

\( -x^{2}(r+tx)W_{3}+ x(r^{2}x+ux^{2}+sx+rtx^{2}-1)W_{2} -x^{2}(t+rux-stx)W_{1}+ (r^{2}x+ux^{2}-s^{2}x^{2}+t^{2}x^{3}+2sx+2rtx^{2}-sux^{3}-1) W_{0},\)

\(\Theta _{3}(x)=x^{n+1}(r+tx)W_{2n+2}+ x^{n+1}(s-s^{2}x+t^{2}x^{2}-u^{2}x^{3}+ux-2sux^{2}+rtx) W_{2n+1}+ x^{n+1}(t+rux-stx)W_{2n} -ux^{n+1}(ux^{2}+sx-1)W_{2n-1}\)

\(+x(ux^{2}+sx-1)W_{3}-x^{2}(t+rs+rux)W_{2}+(r^{2}x+ux^{2}-s^{2}x^{2}+2sx+rtx^{2}-sux^{3}-1) W_{1} -ux^{2}(r+tx)W_{0}. \)
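Theorem 1 (a) can be checked numerically. The following Python sketch (function name ours) compares the left-hand sum with \(\Theta_{1}(x)\) divided by \(rx+sx^{2}+tx^{3}+ux^{4}-1\), using exact rational arithmetic so that equality is exact rather than approximate:

```python
from fractions import Fraction

def check_theorem1a(inits, r, s, t, u, x, n):
    """True iff sum_{k=0}^n x^k W_k equals Theta_1(x)/(rx+sx^2+tx^3+ux^4-1)."""
    w = [Fraction(v) for v in inits]
    while len(w) < n + 4:                      # need terms up to W_{n+3}
        w.append(r * w[-1] + s * w[-2] + t * w[-3] + u * w[-4])
    x = Fraction(x)
    lhs = sum(x**k * w[k] for k in range(n + 1))
    theta1 = (x**(n + 3) * w[n + 3]
              - x**(n + 2) * (r * x - 1) * w[n + 2]
              - x**(n + 1) * (s * x**2 + r * x - 1) * w[n + 1]
              + u * x**(n + 4) * w[n]
              - x**3 * w[3] + x**2 * (r * x - 1) * w[2]
              + x * (s * x**2 + r * x - 1) * w[1]
              + (t * x**3 + s * x**2 + r * x - 1) * w[0])
    den = r * x + s * x**2 + t * x**3 + u * x**4 - 1
    return lhs == theta1 / den
```

For example, the Tetranacci data \((0,1,1,2;1,1,1,1)\) with \(x=1\) or \(x=2\), and the fourth-order Pell data \((0,1,2,5;2,1,1,1)\), pass for every \(n\) we tried.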

The following theorem presents some linear sum formulas of generalized Tetranacci numbers with negative subscripts.

Theorem 2.[34, Theorem 8] Let \(x\) be a real or complex number. For \(n\geq 1\) we have the following formulas:

  • (a) If \(rx^{3}+sx^{2}+tx+u-x^{4}\neq 0,\) then \begin{equation*} \sum\limits_{k=1}^{n}x^{k}W_{-k}=\frac{\Theta _{4}(x)}{rx^{3}+sx^{2}+tx+u-x^{4}}. \end{equation*}
  • (b) If \( 2sx^{3}+t^{2}x+2ux^{2}+r^{2}x^{3}-s^{2}x^{2}-u^{2}-x^{4}+2rtx^{2}-2sux\neq 0\) then \begin{equation*} \sum\limits_{k=1}^{n}x^{k}W_{-2k}=\frac{x\Theta _{5}(x)}{ 2sx^{3}+t^{2}x+2ux^{2}+r^{2}x^{3}-s^{2}x^{2}-u^{2}-x^{4}+2rtx^{2}-2sux}. \end{equation*}
  • (c) If \( 2sx^{3}+t^{2}x+2ux^{2}+r^{2}x^{3}-s^{2}x^{2}-u^{2}-x^{4}+2rtx^{2}-2sux\neq 0\) then \begin{equation*} \sum\limits_{k=1}^{n}x^{k}W_{-2k+1}=\frac{x\Theta _{6}(x)}{ 2sx^{3}+t^{2}x+2ux^{2}+r^{2}x^{3}-s^{2}x^{2}-u^{2}-x^{4}+2rtx^{2}-2sux} \end{equation*}
where

\(\Theta _{4}(x)=-x^{n+1}W_{-n+3}+x^{n+1}(r-x)W_{-n+2}+ x^{n+1}(s+rx-x^{2})W_{-n+1}+ x^{n+1}(t+rx^{2}+sx-x^{3})W_{-n}+xW_{3}-x(r-x)W_{2}+ x(-s-rx+x^{2})W_{1}+x(-t-rx^{2}-sx+x^{3})W_{0},\)

\(\Theta _{5}(x)=x^{n}(u+sx-x^{2})W_{-2n+2}-x^{n}(ru+tx+rsx)W_{-2n+1}+ x^{n}(2sx^{2}-s^{2}x+r^{2}x^{2}-su+ux-x^{3}+rtx) W_{-2n}-ux^{n}(t+rx)W_{-2n-1}\)

\(+ (t+rx)W_{3}+ (-u-r^{2}x-rt-sx+x^{2})W_{2}+ (ru-st+tx)W_{1} -(2sx^{2}-s^{2}x+r^{2}x^{2}-su+ux+t^{2}-x^{3}+2rtx) W_{0},\)

\(\Theta _{6}(x)=-x^{n+1}(t+rx)W_{-2n+2}+ x^{n+1}(u+r^{2}x+rt+sx-x^{2})W_{-2n+1}-x^{n+1}(ru-st+tx)W_{-2n} \)

\(+ ux^{n}(u+sx-x^{2})W_{-2n-1}+(-u-sx+x^{2}) W_{3}+(ru+tx+rsx) W_{2}+ (-2sx^{2}+s^{2}x-r^{2}x^{2}+su-ux+x^{3}-rtx) W_{1}+ u(t+rx)W_{0}.\)
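Theorem 2 (a) admits the same kind of numerical check. The sketch below (names ours) builds the negatively indexed terms with the backward recurrence and compares the sum with \(\Theta_{4}(x)\) over \(rx^{3}+sx^{2}+tx+u-x^{4}\):

```python
from fractions import Fraction

def check_theorem2a(inits, r, s, t, u, x, n):
    """True iff sum_{k=1}^n x^k W_{-k} equals Theta_4(x)/(rx^3+sx^2+tx+u-x^4)."""
    W = {i: Fraction(v) for i, v in enumerate(inits)}    # W_0 .. W_3
    for m in range(1, n + 1):                            # backward recurrence
        W[-m] = (-Fraction(t, u) * W[-m + 1] - Fraction(s, u) * W[-m + 2]
                 - Fraction(r, u) * W[-m + 3] + Fraction(1, u) * W[-m + 4])
    x = Fraction(x)
    lhs = sum(x**k * W[-k] for k in range(1, n + 1))
    theta4 = (-x**(n + 1) * W[-n + 3]
              + x**(n + 1) * (r - x) * W[-n + 2]
              + x**(n + 1) * (s + r * x - x**2) * W[-n + 1]
              + x**(n + 1) * (t + r * x**2 + s * x - x**3) * W[-n]
              + x * W[3] - x * (r - x) * W[2]
              + x * (-s - r * x + x**2) * W[1]
              + x * (-t - r * x**2 - s * x + x**3) * W[0])
    den = r * x**3 + s * x**2 + t * x + u - x**4
    return lhs == theta4 / den
```

The check passes, for example, on the Tetranacci and fourth-order Jacobsthal data for the values of \(n\) and \(x\) we tried.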

In this work, we investigate linear summation formulas of generalized Tetranacci numbers.

2. Linear sum formulas of generalized Tetranacci numbers with positive subscripts

The following theorem presents some linear sum formulas of generalized Tetranacci numbers with positive subscripts.

Theorem 3. Let \(x\) be a non-zero real or complex number. For \(n\geq 0\) we have the following formulas:

  • (a) If \(sx^{2}+tx^{3}+ux^{4}+rx-1\neq 0\) then \begin{equation*} \sum\limits_{k=0}^{n}kx^{k}W_{k}=\frac{\Omega _{1}}{(sx^{2}+tx^{3}+ux^{4}+rx-1)^{2}} \end{equation*} where

    \(\Omega _{1}=x^{n+3}(n(sx^{2}+tx^{3}+ux^{4}+rx-1)+sx^{2}+2rx-ux^{4}-3) W_{n+3}+ x^{n+2}(n(1-rx)(sx^{2}+tx^{3}+ux^{4}+rx-1)-2+4rx-tx^{3}-2ux^{4}-2r^{2}x^{2}-rsx^{3}+rux^{5})W_{n+2}\)

    \(+x^{n+1}(-n(sx^{2}+rx-1)(sx^{2}+tx^{3}+ux^{4}+rx-1)-1+2sx^{2}-2tx^{3}-3ux^{4}-r^{2}x^{2}-s^{2}x^{4}+2rx-2rsx^{3}+rtx^{4}+2rux^{5}+ sux^{6})W_{n+1}\)

    \(+ ux^{n+4}(n(sx^{2}+tx^{3}+ux^{4}+rx-1)-4+2sx^{2}+tx^{3}+3rx)W_{n}+ x^{3}(-sx^{2}+ux^{4}-2rx+3)W_{3}+ x^{2}(tx^{3}+2ux^{4}+2r^{2}x^{2}-4rx+rsx^{3}-rux^{5}+2) W_{2} \)

    \(+ x(-2sx^{2}+2tx^{3}+3ux^{4}+r^{2}x^{2}+s^{2}x^{4}-2rx+2rsx^{3}-rtx^{4}-2rux^{5}-sux^{6}+1) W_{1}-ux^{4}(2sx^{2}+tx^{3}+3rx-4)W_{0}. \)

  • (b) If \( r^{2}x+2ux^{2}-s^{2}x^{2}+t^{2}x^{3}-u^{2}x^{4}+2sx+2rtx^{2}-2sux^{3}-1\neq 0 \) then \begin{equation*} \sum\limits_{k=0}^{n}kx^{k}W_{2k}=\frac{\Omega _{2}}{ (r^{2}x+2ux^{2}-s^{2}x^{2}+t^{2}x^{3}-u^{2}x^{4}+2sx+2rtx^{2}-2sux^{3}-1)^{2} } \end{equation*} where

    \(\Omega _{2}=x^{n+1}(-n(ux^{2}+sx-1)(r^{2}x+2ux^{2}-s^{2}x^{2}+t^{2}x^{3}-u^{2}x^{4}+2sx+2rtx^{2}-2sux^{3}-1) -1-s^{2}x^{2}-2t^{2}x^{3}+u^{2}x^{4}-u^{3}x^{6}+2sx-2rtx^{2}-r^{2}sx^{2}-2r^{2}ux^{3}+ \)

    \( st^{2}x^{4}-s^{2}ux^{4}-2su^{2}x^{5}-2rtux^{4}+ux^{2})W_{2n+2}+ x^{n+2}(n(t+rs+rux)(r^{2}x+2ux^{2}-s^{2}x^{2}+t^{2}x^{3}-u^{2}x^{4}+2sx+2rtx^{2}-2sux^{3}-1) +2rs^{2}x-t^{3}x^{3}-2rs-2t+r^{3}sx+r^{2}tx+2ru^{2}x^{3}\)

    \(+2r^{3}ux^{2}+ru^{3}x^{5}+2tu^{2}x^{4}-3rux+2stx+4rsux^{2}+2stux^{3}-rst^{2}x^{3}+rs^{2}ux^{3}+2rsu^{2} x^{4}+2r^{2}tux^{3})W_{2n+1}+u x^{n+2}(n(r+tx)(r^{2}x+2ux^{2}-s^{2}x^{2}+t^{2}x^{3}-u^{2}x^{4}+2sx+2rtx^{2}-2sux^{3}-1) \)

    \( +r^{3}x-2r-3tx+4stx^{2}+2tux^{3}+2r^{2}tx^{2}+rt^{2}x^{3}-s^{2}tx^{3}+2ru^{2}x^{4}+ tu^{2}x^{5}+2rsx+2rsux^{3})W_{2n-1}+ x^{n+2}(n(u+t^{2}x-u^{2}x^{2}+rt-sux)(r^{2}x+2ux^{2}-s^{2}x^{2}+t^{2}x^{3}-u^{2}x^{4}+2sx+2rtx^{2}\)

    \(-2sux^{3}-1) +4u^{2}x^{2}-3t^{2}x-2u-2u^{3}x^{4}-2rt+2r^{2} t^{2}x^{2}-3r^{2}u^{2}x^{3}-s^{2}t^{2}x^{3}+2s^{2}u^{2}x^{4}+r^{3}tx+r^{2}ux+rt^{3}x^{3}+4st^{2}x^{2}-4s^{2}ux^{2}-6su^{2}x^{3}+ s^{3}ux^{3}\)

    \(+t^{2}ux^{3}+su^{3}x^{5}+5sux-2r^{2}sux^{2}-2rtu^{2}x^{4}+2rstx)W_{2n}+ x^{2}(2r-r^{3}x+3tx-4stx^{2}-2tux^{3}-2r^{2}tx^{2}-rt^{2}x^{3}+s^{2}tx^{3}-2ru^{2}x^{4}-tu^{2}x^{5}-2rsx-2rsux^{3}) W_{3}+ x(-2r^{2}x-ux^{2}\)

    \(+r^{4}x^{2}+s^{2}x^{2}+2t^{2}x^{3}-u^{2}x^{4}+u^{3}x^{6}-2sx+r^{2}t^{2}x^{4}+2r^{2}u^{2}x^{5}-rtx^{2}+3r^{2}sx^{2}+2r^{3}tx^{3}+2r^{2}ux^{3}-st^{2}x^{4}+s^{2}ux^{4}+2su^{2}x^{5}+4rstx^{3}+4rtux^{4}-rs^{2}tx^{4}\)

    \(+2r^{2}sux^{4}+rtu^{2}x^{6}+1) W_{2}+ x^{2}(2t+t^{3}x^{3}-r^{2}tx+4s^{2}tx^{2}-2ru^{2}x^{3}-2r^{3}ux^{2}-s^{3}tx^{3}-ru^{3}x^{5}-2tu^{2}x^{4}+3rux-5stx-4rsux^{2}+2r^{2}stx^{2}+2rst^{2}x^{3}+rs^{2}ux^{3}-2r^{2}tux^{3}+stu^{2}x^{5}) W_{1}+\)

    \( ux^{2}(-r^{2}x-4ux^{2}+4s^{2}x^{2}-s^{3}x^{3}+t^{2}x^{3}+2u^{2}x^{4}-5sx+6sux^{3}+2r^{2}sx^{2}+3r^{2}ux^{3}-2s^{2}ux^{4}-su^{2}x^{5}+t^{2}ux^{5}+2rstx^{3}+4rtux^{4}+2) W_{0}. \)

  • (c) If \( r^{2}x+2ux^{2}-s^{2}x^{2}+t^{2}x^{3}-u^{2}x^{4}+2sx+2rtx^{2}-2sux^{3}-1\neq 0 \) then \begin{equation*} \sum\limits_{k=0}^{n}kx^{k}W_{2k+1}=\frac{\Omega _{3}}{ (r^{2}x+2ux^{2}-s^{2}x^{2}+t^{2}x^{3}-u^{2}x^{4}+2sx+2rtx^{2}-2sux^{3}-1)^{2} } \end{equation*} where

    \(\Omega _{3}=+ x^{n+1}(n(r+tx)(r^{2}x+2ux^{2}-s^{2}x^{2}+t^{2}x^{3}-u^{2}x^{4}+2sx+2rtx^{2}-2sux^{3}-1)-t^{3}x^{4}-2tx-r-2rux^{2}+2stx^{2}+rs^{2}x^{2}-r^{2}tx^{2}-2rt^{2}x^{3}+3ru^{2}x^{4}+ 2tu^{2}x^{5}+ 4rsux^{3}+2stux^{4})W_{2n+2}\)

    \(+ x^{n+1}(n(s-s^{2}x+t^{2}x^{2}-u^{2}x^{3}+ux-2sux^{2}+rtx) (r^{2}x+2ux^{2}-s^{2}x^{2}+t^{2}x^{3}-u^{2}x^{4}+2sx+2rtx^{2}-2sux^{3}-1) +2s^{2}x-s-s^{3}x^{2}-3t^{2}x^{2}+4u^{2}x^{3}-2u^{3}x^{5}-r^{2}s^{2}x^{2}\)

    \(+2r^{2}t^{2}x^{3}-3r^{2}u^{2}x^{4}+6sux^{2}+r^{3}tx^{2}+ r^{2}ux^{2}+rt^{3}x^{4}+2st^{2}x^{3}-4s^{2}ux^{3}-5su^{2}x^{4}+ t^{2}ux^{4}-2rtx-4r^{2}sux^{3}- 2rtu^{2}x^{5}-2ux-2r stux^{4})W_{2n+1}+ \)

    \(x^{n+1}(n(t+rux-stx)(r^{2}x+2ux^{2}-s^{2}x^{2}+t^{2}x^{3}-u^{2}x^{4}+2sx+2rtx^{2}-2sux^{3}-1)-2t^{3}x^{3}-2tux^{2}-t-2rt^{2}x^{2}-s^{2}tx^{2}+r^{3}ux^{2}+st^{3}x^{4}+2ru^{3}x^{5}+3tu^{2}x^{4}- \)

    \( 2rux+2stx+2rsux^{2}+4stux^{3}-r^{2}stx^{2}+2rsu^{2}x^{4}-rt^{2}ux^{4}-2s^{2}tux^{4}-2stu^{2}x^{5})W_{2n}+ux^{n+1}(-n(ux^{2}+sx-1)(r^{2}x+2ux^{2}-s^{2}x^{2}+t^{2}x^{3}-u^{2}x^{4}+2sx+2rtx^{2}-2sux^{3}-1)\)

    \(-1 -s^{2}x^{2}-2t^{2}x^{3}+u^{2}x^{4}-u^{3}x^{6}+2sx-r^{2}sx^{2}-2r^{2}ux^{3}+st^{2}x^{4}-s^{2}ux^{4}-2su^{2}x^{5}-2rtux^{4}+ux^{2}-2rtx^{2})W_{2n-1}+x(-ux^{2}+s^{2}x^{2}+2t^{2}x^{3}-u^{2}x^{4}\)

    \(+u^{3}x^{6}-2sx+2rtx^{2}+r^{2}sx^{2}+2r^{2}ux^{3}-st^{2}x^{4}+s^{2}ux^{4}+2su^{2}x^{5}+2rtux^{4}+1) W_{3}+ x^{2}(2t+t^{3}x^{3}+2rs-2rs^{2}x-r^{3}sx-r^{2}tx-2ru^{2}x^{3}-2r^{3}ux^{2}\)

    \(-ru^{3}x^{5}-2tu^{2}x^{4}+3rux-2stx-4rsux^{2}-2stux^{3}+rst^{2}x^{3}-rs^{2}ux^{3}-2rsu^{2}x^{4}-2r^{2}tux^{3}) W_{2}+ x^{2}(2u+3t^{2}x-4u^{2}x^{2}+2u^{3}x^{4}+2rt-2r^{2}t^{2}x^{2}+3r^{2}u^{2}x^{3}\)

    \(+s^{2}t^{2}x^{3}-2s^{2}u^{2}x^{4}-r^{3}tx-r^{2}ux-rt^{3}x^{3}-4st^{2}x^{2}+4s^{2}ux^{2}+6su^{2}x^{3}-s^{3}ux^{3}-t^{2}ux^{3}-su^{3}x^{5}-5sux+2r^{2}sux^{2}+2rtu^{2}x^{4}-2rstx) W_{1}+ \)

    \( ux^{2}(2r-r^{3}x+3tx-4stx^{2}-2tux^{3}-2r^{2}tx^{2}-rt^{2}x^{3}+s^{2}tx^{3}-2ru^{2}x^{4}-tu^{2}x^{5}-2rsx-2rsux^{3}) W_{0}. \)

Proof.

  • (a) Using the recurrence relation \begin{equation*} W_{n}=rW_{n-1}+sW_{n-2}+tW_{n-3}+uW_{n-4}\,, \end{equation*} i.e., \begin{equation*} uW_{n-4}=W_{n}-rW_{n-1}-sW_{n-2}-tW_{n-3}\,, \end{equation*} we obtain \begin{eqnarray*} u\times 0\times x^{0}W_{0} &=&0\times x^{0}W_{4}-r\times 0\times x^{0}W_{3}-s\times 0\times x^{0}W_{2}-t\times 0\times x^{0}W_{1} , \end{eqnarray*} \begin{eqnarray*} u\times 1\times x^{1}W_{1} &=&1\times x^{1}W_{5}-r\times 1\times x^{1}W_{4}-s\times 1\times x^{1}W_{3}-t\times 1\times x^{1}W_{2} , \end{eqnarray*} \begin{eqnarray*} u\times 2\times x^{2}W_{2} &=&2\times x^{2}W_{6}-r\times 2\times x^{2}W_{5}-s\times 2\times x^{2}W_{4}-t\times 2\times x^{2}W_{3} , \end{eqnarray*} \begin{eqnarray*} u\times 3\times x^{3}W_{3} &=&3\times x^{3}W_{7}-r\times 3\times x^{3}W_{6}-s\times 3\times x^{3}W_{5}-t\times 3\times x^{3}W_{4} , \end{eqnarray*} \begin{eqnarray*} &&\vdots \end{eqnarray*} \begin{eqnarray*} u(n-4)x^{n-4}W_{n-4} &=&(n-4)x^{n-4}W_{n}-r(n-4)x^{n-4}W_{n-1}-s(n-4)x^{n-4}W_{n-2}-t(n-4)x^{n-4}W_{n-3}, \end{eqnarray*} \begin{eqnarray*} u(n-3)x^{n-3}W_{n-3} &=&(n-3)x^{n-3}W_{n+1}-r(n-3)x^{n-3}W_{n}-s(n-3)x^{n-3}W_{n-1}-t(n-3)x^{n-3}W_{n-2}, \end{eqnarray*} \begin{eqnarray*} u(n-2)x^{n-2}W_{n-2} &=&(n-2)x^{n-2}W_{n+2}-r(n-2)x^{n-2}W_{n+1}-s(n-2)x^{n-2}W_{n}-t(n-2)x^{n-2}W_{n-1}, \end{eqnarray*} \begin{eqnarray*} u(n-1)x^{n-1}W_{n-1} &=&(n-1)x^{n-1}W_{n+3}-r(n-1)x^{n-1}W_{n+2}-s(n-1)x^{n-1}W_{n+1}-t(n-1)x^{n-1}W_{n}, \end{eqnarray*}\begin{eqnarray*} u\times n\times x^{n}W_{n} &=&n\times x^{n}W_{n+4}-r\times n\times x^{n}W_{n+3}-s\times n\times x^{n}W_{n+2}-t\times n\times x^{n}W_{n+1}. \end{eqnarray*} If we add the equations side by side we get

    \( u\sum\limits_{k=0}^{n}kx^{k}W_{k}=(nx^{n}W_{n+4}+(n-1)x^{n-1}W_{n+3}+(n-2)x^{n-2}W_{n+2}+(n-3)x^{n-3}W_{n+1}-(-1)x^{-1}W_{3}-(-2)x^{-2}W_{2}\)

    \(-(-3)x^{-3}W_{1}-(-4)x^{-4}W_{0}+\sum\limits_{k=0}^{n}kx^{k-4}W_{k}-4\sum\limits_{k=0}^{n}x^{k-4}W_{k}) -r(nx^{n}W_{n+3}+(n-1)x^{n-1}W_{n+2}+(n-2)x^{n-2}W_{n+1}\)

    \(-(-1)x^{-1}W_{2}-(-2)x^{-2}W_{1}-(-3)x^{-3}W_{0}+\sum\limits_{k=0}^{n}kx^{k-3}W_{k}-3\sum\limits_{k=0}^{n}x^{k-3}W_{k}) -s(nx^{n}W_{n+2}+(n-1)x^{n-1}W_{n+1}\)

    \(-(-1)x^{-1}W_{1}-(-2)x^{-2}W_{0}+ \sum\limits_{k=0}^{n}kx^{k-2}W_{k}-2\sum\limits_{k=0}^{n}x^{k-2}W_{k}) -t(nx^{n}W_{n+1}-(-1)x^{-1}W_{0}+\sum\limits_{k=0}^{n}kx^{k-1}W_{k}- \sum\limits_{k=0}^{n}x^{k-1}W_{k}).\)

    Then if we denote \(\sum\limits_{k=0}^{n}x^{k}W_{k}\) and \(\sum\limits_{k=0}^{n}kx^{k}W_{k}\) as \begin{eqnarray*} A &=&\sum\limits_{k=0}^{n}x^{k}W_{k}, \\ a &=&\sum\limits_{k=0}^{n}kx^{k}W_{k}, \end{eqnarray*} and use \begin{equation*} W_{n+4}=rW_{n+3}+sW_{n+2}+tW_{n+1}+uW_{n}, \end{equation*} we obtain

    \( ua=(nx^{n}(rW_{n+3}+sW_{n+2}+tW_{n+1}+uW_{n})+(n-1)x^{n-1}W_{n+3}+(n-2)x^{n-2}W_{n+2}+(n-3)x^{n-3}W_{n+1}-(-1)x^{-1}W_{3}-(-2)x^{-2}W_{2}-(-3)x^{-3}W_{1}-(-4)x^{-4}W_{0}\)

    \(+x^{-4}a-4x^{-4}A) -r(nx^{n}W_{n+3}+(n-1)x^{n-1}W_{n+2}+(n-2)x^{n-2}W_{n+1}-(-1)x^{-1}W_{2}-(-2)x^{-2}W_{1}-(-3)x^{-3}W_{0}+x^{-3}a-3x^{-3}A) \)

    \(-s(nx^{n}W_{n+2}+(n-1)x^{n-1}W_{n+1}-(-1)x^{-1}W_{1}-(-2)x^{-2}W_{0}+x^{-2}a-2x^{-2}A) -t(nx^{n}W_{n+1}-(-1)x^{-1}W_{0}+x^{-1}a-x^{-1}A).\)

    Using Theorem 1 (a) and solving the last equation for \(a\), we get (a).
  • (b) and (c) Using the recurrence relation \begin{equation*} W_{n}=rW_{n-1}+sW_{n-2}+tW_{n-3}+uW_{n-4} \end{equation*} i.e., \begin{equation*} rW_{n-1}=W_{n}-sW_{n-2}-tW_{n-3}-uW_{n-4}\,, \end{equation*} we obtain \begin{eqnarray*} r\times 1\times x^{1}W_{3} &=&1\times x^{1}W_{4}-s\times 1\times x^{1}W_{2}-t\times 1\times x^{1}W_{1}-u\times 1\times x^{1}W_{0} ,\\ r\times 2\times x^{2}W_{5} &=&2\times x^{2}W_{6}-s\times 2\times x^{2}W_{4}-t\times 2\times x^{2}W_{3}-u\times 2\times x^{2}W_{2} ,\\ r\times 3\times x^{3}W_{7} &=&3\times x^{3}W_{8}-s\times 3\times x^{3}W_{6}-t\times 3\times x^{3}W_{5}-u\times 3\times x^{3}W_{4} ,\\ r\times 4\times x^{4}W_{9} &=&4\times x^{4}W_{10}-s\times 4\times x^{4}W_{8}-t\times 4\times x^{4}W_{7}-u\times 4\times x^{4}W_{6}, \\ &&\vdots \\ r(n-1)x^{n-1}W_{2n-1} &=&(n-1)x^{n-1}W_{2n}-s(n-1)x^{n-1}W_{2n-2} -t(n-1)x^{n-1}W_{2n-3}-u(n-1)x^{n-1}W_{2n-4} ,\\ rnx^{n}W_{2n+1} &=&nx^{n}W_{2n+2}-snx^{n}W_{2n}-tnx^{n}W_{2n-1}-unx^{n}W_{2n-2}. \end{eqnarray*} Now, if we add the above equations side by side, we get \begin{align*} r(-0&\times x^{0}W_{1}+\sum\limits_{k=0}^{n}kx^{k}W_{2k+1}) =(nx^{n}W_{2n+2}-0\times x^{0}W_{2}-(-1)x^{-1}W_{0}+\sum\limits_{k=0}^{n}(k-1)x^{k-1}W_{2k}) -s(-0\times x^{0}W_{0}\\ &+\sum\limits_{k=0}^{n}kx^{k}W_{2k}) -t(-(n+1)x^{n+1}W_{2n+1}+\sum\limits_{k=0}^{n}(k+1)x^{k+1}W_{2k+1})-u(-(n+1)x^{n+1}W_{2n}+\sum\limits_{k=0}^{n}(k+1)x^{k+1}W_{2k})\,, \end{align*} and so
    \begin{align}\label{equat:ufsdmnb} r(-0&\times x^{0}W_{1}+\sum\limits_{k=0}^{n}kx^{k}W_{2k+1}) =(nx^{n}W_{2n+2}-0\times x^{0}W_{2}-(-1)x^{-1}W_{0} +x^{-1}\sum\limits_{k=0}^{n}kx^{k}W_{2k}-x^{-1}\sum\limits_{k=0}^{n}x^{k}W_{2k}) \notag \\ &-s(-0\times x^{0}W_{0}+\sum\limits_{k=0}^{n}kx^{k}W_{2k}) -t(-(n+1)x^{n+1}W_{2n+1}+x^{1}\sum\limits_{k=0}^{n}kx^{k}W_{2k+1}+x^{1} \sum\limits_{k=0}^{n}x^{k}W_{2k+1}) \notag\\ &-u(-(n+1)x^{n+1}W_{2n}+x^{1}\sum\limits_{k=0}^{n}kx^{k}W_{2k}+x^{1} \sum\limits_{k=0}^{n}x^{k}W_{2k}). \end{align}
    (2)
    Similarly, using the recurrence relation \begin{equation*} W_{n}=rW_{n-1}+sW_{n-2}+tW_{n-3}+uW_{n-4}\,, \end{equation*} i.e., \begin{equation*} rW_{n-1}=W_{n}-sW_{n-2}-tW_{n-3}-uW_{n-4}\,, \end{equation*} we write the following obvious equations; \begin{eqnarray*} r\times 1\times x^{1}W_{2} &=&1\times x^{1}W_{3}-s\times 1\times x^{1}W_{1}-t\times 1\times x^{1}W_{0}-u\times 1\times x^{1}W_{-1} ,\\ r\times 2\times x^{2}W_{4} &=&2\times x^{2}W_{5}-s\times 2\times x^{2}W_{3}-t\times 2\times x^{2}W_{2}-u\times 2\times x^{2}W_{1} ,\\ r\times 3\times x^{3}W_{6} &=&3\times x^{3}W_{7}-s\times 3\times x^{3}W_{5}-t\times 3\times x^{3}W_{4}-u\times 3\times x^{3}W_{3} ,\\ r\times 4\times x^{4}W_{8} &=&4\times x^{4}W_{9}-s\times 4\times x^{4}W_{7}-t\times 4\times x^{4}W_{6}-u\times 4\times x^{4}W_{5} ,\\ &&\vdots \\ r(n-1)x^{n-1}W_{2n-2} &=&(n-1)x^{n-1}W_{2n-1}-s(n-1)x^{n-1}W_{2n-3} -t(n-1)x^{n-1}W_{2n-4}-u(n-1)x^{n-1}W_{2n-5} ,\\ rnx^{n}W_{2n} &=&nx^{n}W_{2n+1}-snx^{n}W_{2n-1}-tnx^{n}W_{2n-2}-unx^{n}W_{2n-3}, \\ r(n+1)x^{n+1}W_{2n+2} &=&(n+1)x^{n+1}W_{2n+3}-s(n+1)x^{n+1}W_{2n+1} -t(n+1)x^{n+1}W_{2n}-u(n+1)x^{n+1}W_{2n-1}. \end{eqnarray*} Now, if we add the above equations side by side, we obtain \begin{align*} r(-0&\times x^{0}W_{0}+\sum\limits_{k=0}^{n}kx^{k}W_{2k}) =(-0\times x^{0}W_{1}+\sum\limits_{k=0}^{n}kx^{k}W_{2k+1}) -s(-(n+1)x^{n+1}W_{2n+1}+\sum\limits_{k=0}^{n}(k+1)x^{k+1}W_{2k+1}) \\ &-t(-(n+1)x^{n+1}W_{2n}+\sum\limits_{k=0}^{n}(k+1)x^{k+1}W_{2k}) -u(-(n+2)x^{n+2}W_{2n+1}-(n+1)x^{n+1}W_{2n-1} +1\times x^{1}W_{-1}\\&+\sum\limits_{k=0}^{n}(k+2)x^{k+2}W_{2k+1}). \end{align*} Since \begin{equation*} W_{-1}=-\frac{t}{u}W_{0}-\frac{s}{u}W_{1}-\frac{r}{u}W_{2}+\frac{1}{u}W_{3}\,, \end{equation*} we have
    \begin{align}\label{equat:senbangenb} r(-0&\times x^{0}W_{0}+\sum\limits_{k=0}^{n}kx^{k}W_{2k}) =(-0\times x^{0}W_{1}+\sum\limits_{k=0}^{n}kx^{k}W_{2k+1}) -s(-(n+1)x^{n+1}W_{2n+1}+x^{1}\sum\limits_{k=0}^{n}kx^{k}W_{2k+1} \notag \\ &+x^{1} \sum\limits_{k=0}^{n}x^{k}W_{2k+1})-t(-(n+1)x^{n+1}W_{2n}+x^{1}\sum\limits_{k=0}^{n}kx^{k}W_{2k}+x^{1} \sum\limits_{k=0}^{n}x^{k}W_{2k}) -u(-(n+2)x^{n+2}W_{2n+1} \notag \\ &-(n+1)x^{n+1}W_{2n-1}+1\times x^{1}(-\frac{t}{u}W_{0}-\frac{s}{u}W_{1}-\frac{r}{u}W_{2}+\frac{1 }{u}W_{3}) +x^{2}\sum\limits_{k=0}^{n}kx^{k}W_{2k+1}+2x^{2}\sum\limits_{k=0}^{n}x^{k}W_{2k+1}). \end{align}
    (3)
    Then, solving the system (2)-(3) (using Theorem 1 (b) and (c)), the required result of (b) and (c) follow. In fact, if we denote \begin{eqnarray*} a &=&\sum\limits_{k=0}^{n}kx^{k}W_{2k}, \end{eqnarray*}\begin{eqnarray*} b &=&\sum\limits_{k=0}^{n}kx^{k}W_{2k+1}, \\ f &=&\sum\limits_{k=0}^{n}x^{k}W_{2k}, \\ g &=&\sum\limits_{k=0}^{n}x^{k}W_{2k+1}, \end{eqnarray*} (2) and (3) can be written as follows: \begin{align*} r(-0&\times x^{0}W_{1}+b) =(nx^{n}W_{2n+2}-0\times x^{0}W_{2}-(-1)x^{-1}W_{0}+x^{-1}a-x^{-1}f) -s(-0\times x^{0}W_{0}+a)\\ &-t(-(n+1)x^{n+1}W_{2n+1}+x^{1}b+x^{1}g) -u(-(n+1)x^{n+1}W_{2n}+x^{1}a+x^{1}f)\,, \\ r(-0&\times x^{0}W_{0}+a) =(-0\times x^{0}W_{1}+b) -s(-(n+1)x^{n+1}W_{2n+1}+x^{1}b+x^{1}g)-t(-(n+1)x^{n+1}W_{2n}+x^{1}a+x^{1}f) \\ &-u(-(n+2)x^{n+2}W_{2n+1}-(n+1)x^{n+1}W_{2n-1} +1\times x^{1}(-\frac{t}{u}W_{0}-\frac{s}{u}W_{1}-\frac{r}{u}W_{2}+\frac{1 }{u}W_{3})+x^{2}b+2x^{2}g)\,. \end{align*} Using Theorem 1 (b) and (c) and solving the last two simultaneous equations with respect to \(a\) and \(b\), we get (b) and (c).

Remark 1. Note that the proof of Theorem 3 can be done by taking the derivative of the formulas in Theorem 1. In fact, since \begin{eqnarray*} \sum\limits_{k=0}^{n}x^{k}W_{k} &=&\frac{\Theta _{1}(x)}{rx+sx^{2}+tx^{3}+ux^{4}-1}, \\ \sum\limits_{k=0}^{n}x^{k}W_{2k} &=&\frac{\Theta _{2}(x)}{ r^{2}x+2ux^{2}-s^{2}x^{2}+t^{2}x^{3}-u^{2}x^{4}+2sx+2rtx^{2}-2sux^{3}-1}, \\ \sum\limits_{k=0}^{n}x^{k}W_{2k+1} &=&\frac{\Theta _{3}(x)}{ r^{2}x+2ux^{2}-s^{2}x^{2}+t^{2}x^{3}-u^{2}x^{4}+2sx+2rtx^{2}-2sux^{3}-1}, \end{eqnarray*} by taking the derivative of the both sides of the above formulas with respect to \(x\), we get \begin{eqnarray*} \sum\limits_{k=0}^{n}kx^{k-1}W_{k} &=&\frac{(rx+sx^{2}+tx^{3}+ux^{4}-1)\Theta _{1}^{^{\prime }}(x)-(4ux^{3}+3tx^{2}+2sx+r)\Theta _{1}(x)}{ (rx+sx^{2}+tx^{3}+ux^{4}-1)^{2}}, \\ \sum\limits_{k=0}^{n}kx^{k-1}W_{2k} &=&\frac{ \begin{array}{c} (r^{2}x+2ux^{2}-s^{2}x^{2}+t^{2}x^{3}-u^{2}x^{4}+2sx+2rtx^{2}-2sux^{3}-1) \Theta _{2}^{^{\prime }}(x) \\ -(r^{2}+4rtx-2s^{2}x-6sux^{2}+2s+3t^{2}x^{2}-4u^{2} x^{3}+4ux)\Theta _{2}(x) \end{array} }{ (r^{2}x+2ux^{2}-s^{2}x^{2}+t^{2}x^{3}-u^{2}x^{4}+2sx+2rtx^{2}-2sux^{3}-1)^{2} }, \\ \sum\limits_{k=0}^{n}kx^{k-1}W_{2k+1} &=&\frac{ \begin{array}{c} (r^{2}x+2ux^{2}-s^{2}x^{2}+t^{2}x^{3}-u^{2}x^{4}+2sx+2rtx^{2}-2sux^{3}-1) \Theta _{3}^{^{\prime }}(x) \\ -(r^{2}+4rtx-2s^{2}x-6sux^{2}+2s+3t^{2}x^{2}-4u^{2} x^{3}+4ux)\Theta _{3}(x) \end{array} }{ (r^{2}x+2ux^{2}-s^{2}x^{2}+t^{2}x^{3}-u^{2}x^{4}+2sx+2rtx^{2}-2sux^{3}-1)^{2} }, \end{eqnarray*} i.e., \begin{eqnarray*} \sum\limits_{k=0}^{n}kx^{k}W_{k} &=&x\frac{(rx+sx^{2}+tx^{3}+ux^{4}-1)\Theta _{1}^{^{\prime }}(x)-(4ux^{3}+3tx^{2}+2sx+r)\Theta _{1}(x)}{ (rx+sx^{2}+tx^{3}+ux^{4}-1)^{2}}, \\ \sum\limits_{k=0}^{n}kx^{k}W_{2k} &=&x\frac{ \begin{array}{c} (r^{2}x+2ux^{2}-s^{2}x^{2}+t^{2}x^{3}-u^{2}x^{4}+2sx+2rtx^{2}-2sux^{3}-1) \Theta _{2}^{^{\prime }}(x) \\ -(r^{2}+4rtx-2s^{2}x-6sux^{2}+2s+3t^{2}x^{2}-4u^{2} x^{3}+4ux)\Theta _{2}(x) \end{array} }{ 
(r^{2}x+2ux^{2}-s^{2}x^{2}+t^{2}x^{3}-u^{2}x^{4}+2sx+2rtx^{2}-2sux^{3}-1)^{2} }, \\ \sum\limits_{k=0}^{n}kx^{k}W_{2k+1} &=&x\frac{ \begin{array}{c} (r^{2}x+2ux^{2}-s^{2}x^{2}+t^{2}x^{3}-u^{2}x^{4}+2sx+2rtx^{2}-2sux^{3}-1) \Theta _{3}^{^{\prime }}(x) \\ -(r^{2}+4rtx-2s^{2}x-6sux^{2}+2s+3t^{2}x^{2}-4u^{2} x^{3}+4ux)\Theta _{3}(x) \end{array} }{ (r^{2}x+2ux^{2}-s^{2}x^{2}+t^{2}x^{3}-u^{2}x^{4}+2sx+2rtx^{2}-2sux^{3}-1)^{2} }, \end{eqnarray*} where \(\Theta _{1}^{^{\prime }}(x),\) \(\Theta _{2}^{^{\prime }}(x)\) and \( \Theta _{3}^{^{\prime }}(x)\) denote the derivatives of \(\Theta _{1}(x),\) \( \Theta _{2}(x)\) and \(\Theta _{3}(x),\) respectively.

3. Special Cases

In this section, for special cases of \(x,\) we present the closed form solutions (identities) of the sums \(\sum\limits_{k=0}^{n}kx^{k}W_{k},\) \( \sum\limits_{k=0}^{n}kx^{k}W_{2k}\) and \(\sum\limits_{k=0}^{n}kx^{k}W_{2k+1}\) for specific cases of the sequence \(\{W_{n}\}.\)

3.1. The case \(x=1\)

In this subsection we consider the special case \(x=1\).

The case \(x=1\) of Theorem 3 is given in Soykan [34].

We only consider the case \(x=1,r=1,s=1,t=1,u=2\) (which is not considered in [34]).

Observe that setting \(x=1,r=1,s=1,t=1,u=2\) (i.e., the generalized fourth-order Jacobsthal sequence case) in Theorem 3 (b), (c) makes the right-hand side of the sum formulas an indeterminate form. Applying L'Hospital's rule (twice), however, provides the evaluation of the sum formulas.
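The indeterminacy is easy to see directly: with \(r=s=t=1\), \(u=2\) the unsquared denominator of Theorem 3 (b), (c) reduces to \(-(4x^{4}+3x^{3}-5x^{2}-3x+1)\), which has a simple zero at \(x=1\); its square therefore vanishes to second order there, which is why the rule is applied twice. A quick Python check (helper names ours):

```python
def denom(x):
    """Unsquared denominator of Theorem 3 (b), (c) at r=s=t=1, u=2."""
    r, s, t, u = 1, 1, 1, 2
    return (r * r * x + 2 * u * x**2 - s * s * x**2 + t * t * x**3
            - u * u * x**4 + 2 * s * x + 2 * r * t * x**2
            - 2 * s * u * x**3 - 1)

def denom_prime(x):
    """Its derivative, -(16x^3 + 9x^2 - 10x - 3)."""
    return -(16 * x**3 + 9 * x**2 - 10 * x - 3)
```

Here `denom(1)` is `0` while `denom_prime(1)` is `-12`, so \(x=1\) is a simple zero of the factor and a double zero of its square.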

Theorem 4. If \(r=1,s=1,t=1,u=2\) then for \(n\geq 0\) we have the following formulas:

  • (a) \(\sum\limits_{k=0}^{n}kW_{k}=\frac{1}{16}((4n-2)W_{n+3}-4W_{n+2}- (4n+2)W_{n+1}+2(4n+2)W_{n}+2W_{3}+4W_{2}+2W_{1}-4W_{0}).\)
  • (b) \(\sum\limits_{k=0}^{n}kW_{2k}=\frac{1}{288}(4(6n^{2}+8n-169)W_{2n+2}+8 \left( -6n^{2}+4n+159\right) W_{2n+1}+4(6n^{2}+44n-151)W_{2n}+8(-6n^{2}+4n+159)W_{2n-1} -636W_{3}+1312W_{2}-636W_{1}+1240W_{0}).\)
  • (c) \(\sum\limits_{k=0}^{n}kW_{2k+1}=\frac{1}{288}(4(-6n^{2}+16n+149)W_{2n+2} +16(3n^{2}+13n-80)W_{2n+1}+4(-6n^{2}+16n+149)W_{2n}+8(6n^{2}+8n-169)W_{2n-1}+676W_{3}+604W_{1}-1272W_{2}-1272W_{0}). \)

Proof.

  • (a) We use Theorem 3 (a). If we set \( x=1,r=1,s=1,t=1,u=2\) in Theorem 3 (a) we get (a).
  • (b) We use Theorem 3 (b). If we set \( r=1,s=1,t=1,u=2\) in Theorem 3 (b) then we have \begin{equation*} \sum\limits_{k=0}^{n}kx^{k}W_{2k}=\frac{g_{1}(x)}{(4x^{4}+3x^{3}-5x^{2}-3x+1)^{2}}\,, \end{equation*} where

    \(g_{1}(x)=- x^{n+1}(2x^{2}-2x+6x^{3}+x^{4}+8x^{5}+8x^{6}-n(2x^{2}+x-1)(4x^{4}+3x^{3}-5x^{2}-3x+1)+1)W_{2n+2}\)

    \(+x^{n+2}(12x^{2}+16x^{3}+16x^{4}+8x^{5}-n(2x+2)(4x^{4}+3x^{3}-5x^{2}-3x+1)-4)W_{2n+1}+ x^{n+2}(12x+10x^{2}-32x^{3}-16x^{4}+8x^{5}+n(4x^{2}+x-3)(4x^{4}+3x^{3}-5x^{2}-3x+1)-6)W_{2n}\)

    \(+ 2x^{n+2}(6x^{2}+8x^{3}+8x^{4}+4x^{5}-n(x+1)(4x^{4}+3x^{3}-5x^{2}-3x+1)-2)W_{2n-1}-x^{2}(4x^{5}+8x^{4}+8x^{3}+6x^{2}-2)W_{3}\)

    \(+x (12x^{6}+16x^{5}+9x^{4}+12x^{3}+2x^{2}-4x+1)W_{2}- x^{2}(4x^{5}+8x^{4}+8x^{3}+6x^{2}-2)W_{1}- 2x^{2}(2x^{5}-12x^{4}-20x^{3}+2x^{2}+6x-2)W_{0}. \)

    For \(x=1,\) the right-hand side of the above sum formula is an indeterminate form, so we apply L'Hospital's rule (twice). Then we get (b) using

    \( \sum\limits_{k=0}^{n}kW_{2k} =\left. \frac{\frac{d^{2}}{dx^{2}}\left( g_{1}(x)\right) }{\frac{d^{2}}{dx^{2}}\left( (4x^{4}+3x^{3}-5x^{2}-3x+1)^{2}\right) }\right\vert _{x=1} =\frac{1}{288}(4(6n^{2}+8n-169)W_{2n+2}+8\left( -6n^{2}+4n+159\right) W_{2n+1} \)

    \(+4(6n^{2}+44n-151)W_{2n}+8(-6n^{2}+4n+159)W_{2n-1} -636W_{3}+1312W_{2}-636W_{1}+1240W_{0})\,. \)

  • (c) We use Theorem 3 (c). If we set \( r=1,s=1,t=1,u=2\) in Theorem 3 (c) then we have \begin{equation*} \sum\limits_{k=0}^{n}kx^{k}W_{2k+1}=\frac{g_{2}(x)}{(4x^{4}+3x^{3}-5x^{2}-3x+1)^{2}}\,, \end{equation*} where

    \(g_{2}(x)=-x^{n+1} (2x+2x^{2}-6x^{3}-15x^{4}-8x^{5}+n(x+1)(4x^{4}+3x^{3}-5x^{2}-3x+1)+1)W_{2n+2}- x^{n+1}(4x-10x^{2}-4x^{3}+33x^{4}+24x^{5}\)

    \(-n(4x^{3}+3x^{2}-2x-1)(4x^{4}+3x^{3}-5x^{2}-3x+1)+1)W_{2n+1}- x^{n+1}W_{2n}(2x+2x^{2}-6x^{3}-15x^{4}-8x^{5}+n(x+1)(4x^{4}+3x^{3}-5x^{2}-3x+1)+1)- 2x^{n+1}(2x^{2}-2x+6x^{3}+x^{4}\)

    \(+8x^{5}+8x^{6}-n(2x^{2}+x-1)(4x^{4}+3x^{3}-5x^{2}-3x+1)+1)W_{2n-1}+x(8x^{6}+8x^{5}+x^{4}+6x^{3}+2x^{2}-2x+1)W_{3}-x^{2}(8x^{5}+16x^{4}+16x^{3}+12x^{2}-4)W_{2}\)

    \(-x^{2} (8x^{5}-16x^{4}-32x^{3}+10x^{2}+12x-6)W_{1}- 2x^{2}(4x^{5}+8x^{4}+8x^{3}+6x^{2}-2)W_{0}. \)

    For \(x=1,\) the right-hand side of the above sum formula is an indeterminate form, so we apply L'Hospital's rule (twice). Then we get (c) using

    \( \sum\limits_{k=0}^{n}kW_{2k+1}=\left.\frac{\frac{d^{2}}{dx^{2}}\left(g_{2}(x)\right)}{\frac{d^{2}}{dx^{2}}\left((4x^{4}+3x^{3}-5x^{2}-3x+1)^{2}\right) }\right\vert_{x=1} =\frac{1}{288}(4(-6n^{2}+16n+149)W_{2n+2} \)

    \(+16(3n^{2}+13n-80)W_{2n+1} +4(-6n^{2}+16n+149)W_{2n}+8(6n^{2}+8n-169)W_{2n-1}+676W_{3}+604W_{1}-1272W_{2}-1272W_{0}). \)
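Since Theorem 4 is stated for arbitrary initial values, it can be verified numerically for any choice of \(W_{0},\dots,W_{3}\). The following Python sketch (function name ours) checks parts (a)-(c) with exact rational arithmetic for a given \(n\geq 1\):

```python
from fractions import Fraction

def check_theorem4(w0, w1, w2, w3, n):
    """Check Theorem 4 (a), (b), (c) at r=s=t=1, u=2 for some n >= 1."""
    w = [Fraction(w0), Fraction(w1), Fraction(w2), Fraction(w3)]
    while len(w) < 2 * n + 3:                  # terms up to W_{2n+2}
        w.append(w[-1] + w[-2] + w[-3] + 2 * w[-4])
    a = sum(k * w[k] for k in range(n + 1)) == (
        (4*n - 2) * w[n+3] - 4 * w[n+2] - (4*n + 2) * w[n+1]
        + 2 * (4*n + 2) * w[n] + 2*w[3] + 4*w[2] + 2*w[1] - 4*w[0]) / 16
    b = sum(k * w[2*k] for k in range(n + 1)) == (
        4*(6*n*n + 8*n - 169) * w[2*n+2] + 8*(-6*n*n + 4*n + 159) * w[2*n+1]
        + 4*(6*n*n + 44*n - 151) * w[2*n] + 8*(-6*n*n + 4*n + 159) * w[2*n-1]
        - 636*w[3] + 1312*w[2] - 636*w[1] + 1240*w[0]) / 288
    c = sum(k * w[2*k + 1] for k in range(n + 1)) == (
        4*(-6*n*n + 16*n + 149) * w[2*n+2] + 16*(3*n*n + 13*n - 80) * w[2*n+1]
        + 4*(-6*n*n + 16*n + 149) * w[2*n] + 8*(6*n*n + 8*n - 169) * w[2*n-1]
        + 676*w[3] - 1272*w[2] + 604*w[1] - 1272*w[0]) / 288
    return a and b and c
```

The corollaries below correspond to particular initial values; for example, `check_theorem4(0, 1, 1, 1, n)` exercises the fourth-order Jacobsthal case.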

Taking \(W_{n}=J_{n}\) with \(J_{0}=0,J_{1}=1,J_{2}=1,J_{3}=1\) in the last theorem, we have the following corollary which presents linear sum formulas of the fourth-order Jacobsthal numbers.

Corollary 1. For \(n\geq 0,\) fourth order Jacobsthal numbers have the following property:

  • (a) \(\sum\limits_{k=0}^{n}kJ_{k}=\frac{1}{16}((4n-2)J_{n+3}-4J_{n+2}- (4n+2)J_{n+1}+2(4n+2)J_{n}+8).\)
  • (b) \(\sum\limits_{k=0}^{n}kJ_{2k}=\frac{1}{288}(4(6n^{2}+8n-169)J_{2n+2}+8 \left( -6n^{2}+4n+159\right) J_{2n+1}+4(6n^{2}+44n-151)J_{2n}+8(-6n^{2}+4n+159)J_{2n-1} +40).\)
  • (c) \(\sum\limits_{k=0}^{n}kJ_{2k+1}=\frac{1}{288}(4(-6n^{2}+16n+149)J_{2n+2} +16(3n^{2}+13n-80)J_{2n+1}+4(-6n^{2}+16n+149)J_{2n}+8(6n^{2}+8n-169)J_{2n-1}+8). \)

From the last theorem, we have the following corollary which gives linear sum formulas of the fourth-order Jacobsthal-Lucas numbers (take \(W_{n}=j_{n}\) with \(j_{0}=2,j_{1}=1,j_{2}=5,j_{3}=10\)).

Corollary 2. For \(n\geq 0,\) fourth order Jacobsthal-Lucas numbers have the following property:

  • (a) \(\sum\limits_{k=0}^{n}kj_{k}=\frac{1}{16}((4n-2)j_{n+3}-4j_{n+2}- (4n+2)j_{n+1}+2(4n+2)j_{n}+34).\)
  • (b) \(\sum\limits_{k=0}^{n}kj_{2k}=\frac{1}{288}(4(6n^{2}+8n-169)j_{2n+2}+8 \left( -6n^{2}+4n+159\right) j_{2n+1}+4(6n^{2}+44n-151)j_{2n}+8(-6n^{2}+4n+159)j_{2n-1} +2044). \)
  • (c) \(\sum\limits_{k=0}^{n}kj_{2k+1}=\frac{1}{288}(4(-6n^{2}+16n+149)j_{2n+2} +16(3n^{2}+13n-80)j_{2n+1}+4(-6n^{2}+16n+149)j_{2n}+8(6n^{2}+8n-169)j_{2n-1}-1540). \)

Taking \(W_{n}=K_{n}\) with \(K_{0}=3,K_{1}=1,K_{2}=3,K_{3}=10\) in the last theorem, we have the following corollary which presents linear sum formulas of the modified fourth-order Jacobsthal numbers.

Corollary 3. For \(n\geq 0,\) modified fourth-order Jacobsthal numbers have the following property:

  • (a) \(\sum\limits_{k=0}^{n}kK_{k}=\frac{1}{16}((4n-2)K_{n+3}-4K_{n+2}- (4n+2)K_{n+1}+2(4n+2)K_{n}+22).\)
  • (b) \(\sum\limits_{k=0}^{n}kK_{2k}=\frac{1}{288}(4(6n^{2}+8n-169)K_{2n+2}+8 \left( -6n^{2}+4n+159\right) K_{2n+1}+4(6n^{2}+44n-151)K_{2n}+8(-6n^{2}+4n+159)K_{2n-1} +660).\)
  • (c) \(\sum\limits_{k=0}^{n}kK_{2k+1}=\frac{1}{288}(4(-6n^{2}+16n+149)K_{2n+2} +16(3n^{2}+13n-80)K_{2n+1}+4(-6n^{2}+16n+149)K_{2n}+8(6n^{2}+8n-169)K_{2n-1}-268). \)

From the last theorem, we have the following corollary which gives linear sum formulas of the fourth-order Jacobsthal Perrin numbers (take \( W_{n}=Q_{n} \) with \(Q_{0}=3,Q_{1}=0,Q_{2}=2,Q_{3}=8\)).

Corollary 4. For \(n\geq 0,\) fourth-order Jacobsthal Perrin numbers have the following property:

  • (a) \(\sum\limits_{k=0}^{n}kQ_{k}=\frac{1}{16}((4n-2)Q_{n+3}-4Q_{n+2}- (4n+2)Q_{n+1}+2(4n+2)Q_{n}+12).\)
  • (b) \(\sum\limits_{k=0}^{n}kQ_{2k}=\frac{1}{288}(4(6n^{2}+8n-169)Q_{2n+2}+8 \left( -6n^{2}+4n+159\right) Q_{2n+1}+4(6n^{2}+44n-151)Q_{2n}+8(-6n^{2}+4n+159)Q_{2n-1} +1256). \)
  • (c) \(\sum\limits_{k=0}^{n}kQ_{2k+1}=\frac{1}{288}(4(-6n^{2}+16n+149)Q_{2n+2} +16(3n^{2}+13n-80)Q_{2n+1}+4(-6n^{2}+16n+149)Q_{2n}+8(6n^{2}+8n-169)Q_{2n-1}-952). \)

Taking \(W_{n}=S_{n}\) with \(S_{0}=0,S_{1}=1,S_{2}=1,S_{3}=2\) in the last theorem, we have the following corollary which presents linear sum formulas of the adjusted fourth-order Jacobsthal numbers.

Corollary 5. For \(n\geq 0,\) adjusted fourth-order Jacobsthal numbers have the following property:

  • (a) \(\sum\limits_{k=0}^{n}kS_{k}=\frac{1}{16}((4n-2)S_{n+3}-4S_{n+2}- (4n+2)S_{n+1}+2(4n+2)S_{n}+10).\)
  • (b) \(\sum\limits_{k=0}^{n}kS_{2k}=\frac{1}{288}(4(6n^{2}+8n-169)S_{2n+2}+8 \left( -6n^{2}+4n+159\right) S_{2n+1}+4(6n^{2}+44n-151)S_{2n}+8(-6n^{2}+4n+159)S_{2n-1} -596).\)
  • (c) \(\sum\limits_{k=0}^{n}kS_{2k+1}=\frac{1}{288}(4(-6n^{2}+16n+149)S_{2n+2} +16(3n^{2}+13n-80)S_{2n+1}+4(-6n^{2}+16n+149)S_{2n}+8(6n^{2}+8n-169)S_{2n-1}+684). \)

From the last theorem, we have the following corollary which gives linear sum formulas of the modified fourth-order Jacobsthal-Lucas numbers (take \( W_{n}=R_{n}\) with \(R_{0}=4,R_{1}=1,R_{2}=3,R_{3}=7\)).

Corollary 6. For \(n\geq 0,\) modified fourth-order Jacobsthal-Lucas numbers have the following property:

  • (a) \(\sum\limits_{k=0}^{n}kR_{k}=\frac{1}{16}((4n-2)R_{n+3}-4R_{n+2}- (4n+2)R_{n+1}+2(4n+2)R_{n}+12).\)
  • (b) \(\sum\limits_{k=0}^{n}kR_{2k}=\frac{1}{288}(4(6n^{2}+8n-169)R_{2n+2}+8 \left( -6n^{2}+4n+159\right) R_{2n+1}+4(6n^{2}+44n-151)R_{2n}+8(-6n^{2}+4n+159)R_{2n-1} +3808). \)
  • (c) \(\sum\limits_{k=0}^{n}kR_{2k+1}=\frac{1}{288}(4(-6n^{2}+16n+149)R_{2n+2} +16(3n^{2}+13n-80)R_{2n+1}+4(-6n^{2}+16n+149)R_{2n}+8(6n^{2}+8n-169)R_{2n-1}-3568). \)

3.2. The case \(x=-1\)

In this subsection we consider the special case \(x=-1\) and we present the closed form solutions (identities) of the sums \( \sum\limits_{k=0}^{n}k(-1)^{k}W_{k},\) \(\sum\limits_{k=0}^{n}k(-1)^{k}W_{2k}\) and \( \sum\limits_{k=0}^{n}k(-1)^{k}W_{2k+1}\) for the specific case of the sequence \( \{W_{n}\}.\)

Taking \(x=-1,r=s=t=u=1\) in Theorem 3 (a), (b) and (c), we obtain the following proposition.

Proposition 1. If \(x=-1,r=s=t=u=1\) then for \(n\geq 0\) we have the following formulas:

  • (a) \(\sum\limits_{k=0}^{n}k(-1)^{k}W_{k}=\left( -1\right) ^{n}((n+5)W_{n+3}-(2n+9)W_{n+2}+(n+2)W_{n+1}-(n+6)W_{n})-5W_{3}+9W_{2}-2W_{1}+6W_{0}. \)
  • (b) \(\sum\limits_{k=0}^{n}k(-1)^{k}W_{2k}=\left( -1\right) ^{n}((n+2)W_{2n+2}- (n+3)W_{2n+1}-(n+2)W_{2n}+W_{2n-1})-W_{3}-W_{2}+4W_{1}+3W_{0}.\)
  • (c) \(\sum\limits_{k=0}^{n}k(-1)^{k}W_{2k+1}=\left( -1\right) ^{n}(-W_{2n+2}+ (n+3)W_{2n}+(n+2)W_{2n-1})-2W_{3}+3W_{2}+2W_{1}-W_{0}.\)

From the above proposition, we have the following corollary which gives linear sum formulas of Tetranacci numbers (take \(W_{n}=M_{n}\) with \( M_{0}=0,M_{1}=1,M_{2}=1,M_{3}=2\)).

Corollary 7. For \(n\geq 0,\) Tetranacci numbers have the following properties.

  • (a) \(\sum\limits_{k=0}^{n}k(-1)^{k}M_{k}=\left( -1\right) ^{n}((n+5)M_{n+3}-(2n+9)M_{n+2}+(n+2)M_{n+1}-(n+6)M_{n})-3.\)
  • (b) \(\sum\limits_{k=0}^{n}k(-1)^{k}M_{2k}= \left( -1\right) ^{n}((n+2)M_{2n+2}- (n+3)M_{2n+1}-(n+2)M_{2n}+M_{2n-1})+1.\)
  • (c) \(\sum\limits_{k=0}^{n}k(-1)^{k}M_{2k+1}=\left( -1\right) ^{n}(-M_{2n+2}+ (n+3)M_{2n}+(n+2)M_{2n-1})+1.\)
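As a numerical sanity check (an illustration, not part of the paper), the three alternating sums of Corollary 7 can be verified directly from the Tetranacci recurrence; \(M_{-1}=M_{3}-M_{2}-M_{1}-M_{0}=0\) is obtained by running the recurrence backwards, since part (b) and (c) use \(M_{2n-1}\) at \(n=0\):

```python
# Tetranacci numbers M_n = M_{n-1} + M_{n-2} + M_{n-3} + M_{n-4},
# M_0=0, M_1=1, M_2=1, M_3=2; M_{-1} = 0 from the recurrence run backwards.
M = {-1: 0, 0: 0, 1: 1, 2: 1, 3: 2}
for n in range(4, 30):
    M[n] = M[n - 1] + M[n - 2] + M[n - 3] + M[n - 4]

for n in range(0, 12):
    sgn = (-1) ** n
    # (a)
    lhs = sum(k * (-1)**k * M[k] for k in range(n + 1))
    assert lhs == sgn * ((n+5)*M[n+3] - (2*n+9)*M[n+2] + (n+2)*M[n+1] - (n+6)*M[n]) - 3
    # (b)
    lhs = sum(k * (-1)**k * M[2*k] for k in range(n + 1))
    assert lhs == sgn * ((n+2)*M[2*n+2] - (n+3)*M[2*n+1] - (n+2)*M[2*n] + M[2*n-1]) + 1
    # (c)
    lhs = sum(k * (-1)**k * M[2*k+1] for k in range(n + 1))
    assert lhs == sgn * (-M[2*n+2] + (n+3)*M[2*n] + (n+2)*M[2*n-1]) + 1
print("Corollary 7 (a)-(c) verified for 0 <= n <= 11")
```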

Taking \(W_{n}=R_{n}\) with \(R_{0}=4,R_{1}=1,R_{2}=3,R_{3}=7\) in the above proposition, we have the following corollary which presents linear sum formulas of Tetranacci-Lucas numbers.

Corollary 8. For \(n\geq 0,\) Tetranacci-Lucas numbers have the following properties.

  • (a) \(\sum\limits_{k=0}^{n}k(-1)^{k}R_{k}=\left( -1\right) ^{n}((n+5)R_{n+3}-(2n+9)R_{n+2}+(n+2)R_{n+1}-(n+6)R_{n})+14.\)
  • (b) \(\sum\limits_{k=0}^{n}k(-1)^{k}R_{2k}=\left( -1\right) ^{n}((n+2)R_{2n+2}- (n+3)R_{2n+1}-(n+2)R_{2n}+R_{2n-1})+6.\)
  • (c) \(\sum\limits_{k=0}^{n}k(-1)^{k}R_{2k+1}=\left( -1\right) ^{n}(-R_{2n+2}+ (n+3)R_{2n}+(n+2)R_{2n-1})-7.\)

Taking \(x=-1,r=2,s=t=u=1\) in Theorem 3 (a), (b) and (c), we obtain the following proposition.

Proposition 2. If \(x=-1,r=2,s=t=u=1\) then for \(n\geq 0\) we have the following formulas:

  • (a) \(\sum\limits_{k=0}^{n}k(-1)^{k}W_{k}=\frac{1}{4}(\left( -1\right) ^{n}((2n+7)W_{n+3}-(6n+19) W_{n+2}+(4n+6) W_{n+1}-(2n+9)W_{n})-7W_{3}+19W_{2}-6W_{1}+9W_{0}).\)
  • (b) \(\sum\limits_{k=0}^{n}k(-1)^{k}W_{2k}=\frac{1}{4}(\left( -1\right) ^{n}((2n+3) W_{2n+2}-(2n+3)W_{2n+1}-(4n+10)W_{2n}-(2n+5)W_{2n-1})+5W_{3}-13W_{2}-2W_{1}+5W_{0}). \)
  • (c) \(\sum\limits_{k=0}^{n}k(-1)^{k}W_{2k+1}=\frac{1}{4}(\left( -1\right) ^{n}((2n+3)W_{2n+2}-(2n+7)W_{2n+1}-2W_{2n}+(2n+3)W_{2n-1})-3W_{3}+3W_{2}+10W_{1}+5W_{0}). \)

From the last proposition, we have the following corollary which gives linear sum formulas of the fourth-order Pell numbers (take \(W_{n}=P_{n}\) with \(P_{0}=0,P_{1}=1,P_{2}=2,P_{3}=5\)).

Corollary 9. For \(n\geq 0,\) fourth-order Pell numbers have the following properties:

  • (a) \(\sum\limits_{k=0}^{n}k(-1)^{k}P_{k}=\frac{1}{4}(\left( -1\right) ^{n}((2n+7)P_{n+3}-(6n+19) P_{n+2}+(4n+6) P_{n+1}-(2n+9)P_{n})-3).\)
  • (b) \(\sum\limits_{k=0}^{n}k(-1)^{k}P_{2k}=\frac{1}{4}(\left( -1\right) ^{n}((2n+3) P_{2n+2}-(2n+3)P_{2n+1}-(4n+10)P_{2n}-(2n+5)P_{2n-1})-3).\)
  • (c) \(\sum\limits_{k=0}^{n}k(-1)^{k}P_{2k+1}=\frac{1}{4}(\left( -1\right) ^{n}((2n+3)P_{2n+2}-(2n+7)P_{2n+1}-2P_{2n}+(2n+3)P_{2n-1})+1).\)
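Corollary 9 can be spot-checked the same way (a verification sketch, not part of the paper). The fourth-order Pell recurrence has \(r=2,\ s=t=u=1\), and running it backwards gives \(P_{-1}=P_{3}-2P_{2}-P_{1}-P_{0}=0\), needed at \(n=0\) in parts (b) and (c):

```python
from fractions import Fraction

# Fourth-order Pell numbers P_n = 2P_{n-1} + P_{n-2} + P_{n-3} + P_{n-4},
# P_0=0, P_1=1, P_2=2, P_3=5; P_{-1} = 0 from the recurrence run backwards.
P = {-1: 0, 0: 0, 1: 1, 2: 2, 3: 5}
for n in range(4, 30):
    P[n] = 2*P[n - 1] + P[n - 2] + P[n - 3] + P[n - 4]

q = Fraction(1, 4)  # exact 1/4 prefactor from Proposition 2
for n in range(0, 12):
    s = (-1) ** n
    assert sum(k*(-1)**k*P[k] for k in range(n + 1)) == \
        q*(s*((2*n+7)*P[n+3] - (6*n+19)*P[n+2] + (4*n+6)*P[n+1] - (2*n+9)*P[n]) - 3)
    assert sum(k*(-1)**k*P[2*k] for k in range(n + 1)) == \
        q*(s*((2*n+3)*P[2*n+2] - (2*n+3)*P[2*n+1] - (4*n+10)*P[2*n] - (2*n+5)*P[2*n-1]) - 3)
    assert sum(k*(-1)**k*P[2*k+1] for k in range(n + 1)) == \
        q*(s*((2*n+3)*P[2*n+2] - (2*n+7)*P[2*n+1] - 2*P[2*n] + (2*n+3)*P[2*n-1]) + 1)
print("Corollary 9 (a)-(c) verified for 0 <= n <= 11")
```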

Taking \(W_{n}=Q_{n}\) with \(Q_{0}=4,Q_{1}=2,Q_{2}=6,Q_{3}=17\) in the last proposition, we have the following corollary which presents linear sum formulas of the fourth-order Pell-Lucas numbers.

Corollary 10. For \(n\geq 0,\) fourth-order Pell-Lucas numbers have the following properties:

  • (a) \(\sum\limits_{k=0}^{n}k(-1)^{k}Q_{k}=\frac{1}{4}(\left( -1\right) ^{n}((2n+7)Q_{n+3}-(6n+19) Q_{n+2}+(4n+6) Q_{n+1}-(2n+9)Q_{n})+19).\)
  • (b) \(\sum\limits_{k=0}^{n}k(-1)^{k}Q_{2k}=\frac{1}{4}(\left( -1\right) ^{n}((2n+3) Q_{2n+2}-(2n+3)Q_{2n+1}-(4n+10)Q_{2n}-(2n+5)Q_{2n-1})+23).\)
  • (c) \(\sum\limits_{k=0}^{n}k(-1)^{k}Q_{2k+1}=\frac{1}{4}(\left( -1\right) ^{n}((2n+3)Q_{2n+2}-(2n+7)Q_{2n+1}-2Q_{2n}+(2n+3)Q_{2n-1})+7).\)

Observe that setting \(x=-1,r=1,s=1,t=1,u=2\) (i.e., for the generalized fourth-order Jacobsthal case) in Theorem 3 (a), (b) and (c) makes the right-hand side of the sum formulas an indeterminate form. An application of L'Hospital's rule, however, provides the evaluation of the sum formulas.

Theorem 5. If \(r=1,s=1,t=1,u=2\) then for \(n\geq 0\) we have the following formulas:

  • (a) \(\sum\limits_{k=0}^{n}k(-1)^{k}W_{k}=\frac{1}{36}(\left( -1\right) ^{n}(-(3n^{2}+5n-53)W_{n+3}+2(3n^{2}+2n-54)W_{n+2}-(3n^{2}-13n-53)W_{n+1}+2(3n^{2}+11n-45)W_{n})-53W_{3}+108W_{2}-53W_{1}+90W_{0}). \)
  • (b) \(\sum\limits_{k=0}^{n}k(-1)^{k}W_{2k}=\frac{1}{100}(\left( -1\right) ^{n}(-(15n^{2}-4n-159)W_{2n+2}+2(5n^{2}+2n-54)W_{2n+1}+(35n^{2}+54n-350)W_{2n}+2(5n^{2}+2n-54)W_{2n-1})+54W_{3}-213W_{2}+54W_{1}+296W_{0}). \)
  • (c) \(\sum\limits_{k=0}^{n}k(-1)^{k}W_{2k+1}=\frac{1}{100}(\left( -1\right) ^{n}(-(5n^{2}-8n-51)W_{2n+2}+(20n^{2}+58n-191)W_{2n+1}-(5n^{2}-8n-51)W_{2n}-2(15n^{2}-4n-159)W_{2n-1})-159W_{3}+108W_{2}+350W_{1}+108W_{0}). \)

Proof.

  • (a) We use Theorem 3 (a). If we set \( r=1,s=1,t=1,u=2\) in Theorem 3 (a) then we have \begin{equation*} \sum\limits_{k=0}^{n}x^{k}W_{k}=\frac{g_{3}(x)}{\left( 2x-1\right) ^{2}\left( x+1\right) ^{2}\left( x^{2}+1\right) ^{2}}\,, \end{equation*} where

    \(g_{3}(x)=x^{n+3}(2x+n(2x^{4}+x^{3}+x^{2}+x-1)+x^{2}-2x^{4}-3)W_{n+3}- x^{n+2}(2x^{2}-4x+2x^{3}+4x^{4}-2x^{5}+n(x-1)(2x^{4}+x^{3}+x^{2}+x-1)+2)W_{n+2}\)

    \(- x^{n+1}(4x^{3}-x^{2}-2x+6x^{4}-4x^{5}-2x^{6}+n(x^{2}+x-1)(2x^{4}+x^{3}+x^{2}+x-1)+1) W_{n+1}+2x^{n+4}(3x+n(2x^{4}+x^{3}+x^{2}+x-1)+2x^{2}+x^{3}-4)W_{n}+x^{3}(2x^{4}-x^{2}-2x+3)W_{3}\)

    \(+ x^{2}(-2x^{5}+4x^{4}+2x^{3}+2x^{2}-4x+2)W_{2}-x (2x^{6}+4x^{5}-6x^{4}-4x^{3}+x^{2}+2x-1)W_{1}- 2x^{4}(x^{3}+2x^{2}+3x-4)W_{0}. \)

    For \(x=-1,\) the right-hand side of the above sum formula is an indeterminate form. Now, we can use L'Hospital's rule (twice). Then we get (a) using

    \( \sum\limits_{k=0}^{n}k(-1)^{k}W_{k} =\left. \frac{\frac{d^{2}}{dx^{2}}\left( g_{3}(x)\right) }{\frac{d^{2}}{dx^{2}}\left( \left( 2x-1\right) ^{2}\left( x+1\right) ^{2}\left( x^{2}+1\right) ^{2}\right) }\right\vert _{x=-1} =\frac{1}{36}(\left( -1\right) ^{n}(-(3n^{2}+5n-53)W_{n+3}\)

    \(+2(3n^{2}+2n-54)W_{n+2} -(3n^{2}-13n-53)W_{n+1}+2(3n^{2}+11n-45)W_{n})-53W_{3} +108W_{2}-53W_{1}+90W_{0}). \)

  • (b) We use Theorem 3 (b). If we set \( r=1,s=1,t=1,u=2\) in Theorem 3 (b) then we have \begin{equation*} \sum\limits_{k=0}^{n}x^{k}W_{2k}=\frac{g_{4}(x)}{(4x-1)^{2}(x-1)^{2}(x+1)^{4}}\,, \end{equation*} where

    \(g_{4}(x)=- x^{n+1}(2x^{2}-2x+6x^{3}+x^{4}+8x^{5}+8x^{6}-n(2x^{2}+x-1)(4x^{4}+3x^{3}-5x^{2}-3x+1)+1)W_{2n+2}+x^{n+2}(12x^{2}+16x^{3}+16x^{4}+8x^{5}\)

    \(-n(2x+2)(4x^{4}+3x^{3}-5x^{2}-3x+1)-4)W_{2n+1}+ x^{n+2}(12x+10x^{2}-32x^{3}-16x^{4}+8x^{5}+n(4x^{2}+x-3)\left( (4x^{4}+3x^{3}-5x^{2}-3x+1\right) -6)W_{2n}\)

    \(+ 2x^{n+2}(6x^{2}+8x^{3}+8x^{4}+4x^{5}-n(x+1)(4x^{4}+3x^{3}-5x^{2}-3x+1)-2)W_{2n-1}-x^{2}(4x^{5}+8x^{4}+8x^{3}+6x^{2}-2)W_{3}+x(12x^{6}+16x^{5}\)

    \(+9x^{4}+12x^{3}+2x^{2}-4x+1) W_{2}- x^{2}(4x^{5}+8x^{4}+8x^{3}+6x^{2}-2)W_{1}- 2x^{2}(2x^{5}-12x^{4}-20x^{3}+2x^{2}+6x-2)W_{0}. \)

    For \(x=-1,\) the right-hand side of the above sum formula is an indeterminate form. Now, we can use L'Hospital's rule (four times). Then we get (b) using

    \(\sum\limits_{k=0}^{n}k(-1)^{k}W_{2k} =\left. \frac{\frac{d^{4}}{dx^{4}}\left( g_{4}(x)\right) }{\frac{d^{4}}{dx^{4}}\left( (4x-1)^{2}(x-1)^{2}(x+1)^{4}\right) }\right\vert _{x=-1} =\frac{1}{100}(\left( -1\right) ^{n}(-(15n^{2}-4n-159)W_{2n+2}\)

    \(+2(5n^{2}+2n-54)W_{2n+1} +(35n^{2}+54n-350)W_{2n}+2(5n^{2}+2n-54)W_{2n-1}) +54W_{3}-213W_{2}+54W_{1}+296W_{0}). \)

  • (c) We use Theorem 3 (c). If we set \( r=1,s=1,t=1,u=2\) in Theorem 3 (c) then we have \begin{equation*} \sum\limits_{k=0}^{n}x^{k}W_{2k+1}=\frac{g_{5}(x)}{(4x-1)^{2}(x-1)^{2}(x+1)^{4}}\,, \end{equation*} where

    \(g_{5}(x)=-x^{n+1} (2x+2x^{2}-6x^{3}-15x^{4}-8x^{5}+n(x+1)(4x^{4}+3x^{3}-5x^{2}-3x+1)+1)W_{2n+2}- x^{n+1}(4x-10x^{2}-4x^{3}+33x^{4}+24x^{5}\)

    \(-n(4x^{3}+3x^{2}-2x-1)(4x^{4}+3x^{3}-5x^{2}-3x+1)+1)W_{2n+1}- x^{n+1}(2x+2x^{2}-6x^{3}-15x^{4}-8x^{5}+n(x+1)(4x^{4}+3x^{3}-5x^{2}-3x+1)+1) W_{2n}\)

    \(- 2x^{n+1}(2x^{2}-2x+6x^{3}+x^{4}+8x^{5}+8x^{6}-n(2x^{2}+x-1)(4x^{4}+3x^{3}-5x^{2}-3x+1)+1)W_{2n-1}\)

    \(+x(8x^{6}+8x^{5}+x^{4}+6x^{3}+2x^{2}-2x+1)W_{3}-x^{2}(8x^{5}+16x^{4}+16x^{3}+12x^{2}-4)W_{2}-x^{2} (8x^{5}-16x^{4}-32x^{3}+10x^{2}+12x-6)W_{1}- 2x^{2}(4x^{5}+8x^{4}+8x^{3}+6x^{2}-2)W_{0}. \)

    For \(x=-1,\) the right-hand side of the above sum formula is an indeterminate form. Now, we can use L'Hospital's rule (four times). Then we get (c) using

    \( \sum\limits_{k=0}^{n}k(-1)^{k}W_{2k+1} =\left. \frac{\frac{d^{4}}{dx^{4}}\left( g_{5}(x)\right) }{\frac{d^{4}}{dx^{4}}\left( (4x-1)^{2}(x-1)^{2}(x+1)^{4}\right) }\right\vert _{x=-1} =\frac{1}{100}(\left( -1\right)^{n}(-(5n^{2}-8n-51)W_{2n+2}\)

    \(+(20n^{2}+58n-191)W_{2n+1} -(5n^{2}-8n-51)W_{2n}-2(15n^{2}-4n-159)W_{2n-1}) -159W_{3}+108W_{2}+350W_{1}+108W_{0}). \)

Taking \(W_{n}=J_{n}\) with \(J_{0}=0,J_{1}=1,J_{2}=1,J_{3}=1\) in the last theorem, we have the following corollary which presents linear sum formulas of the fourth-order Jacobsthal numbers.

Corollary 11. For \(n\geq 0,\) fourth-order Jacobsthal numbers have the following property:

  • (a) \(\sum\limits_{k=0}^{n}k(-1)^{k}J_{k}=\frac{1}{36}(\left( -1\right) ^{n}(-(3n^{2}+5n-53)J_{n+3}+2(3n^{2}+2n-54)J_{n+2}-(3n^{2}-13n-53)J_{n+1}+2(3n^{2}+11n-45)J_{n})+2). \)
  • (b) \(\sum\limits_{k=0}^{n}k(-1)^{k}J_{2k}=\frac{1}{100}(\left( -1\right) ^{n}(-(15n^{2}-4n-159)J_{2n+2}+2(5n^{2}+2n-54)J_{2n+1}+(35n^{2}+54n-350)J_{2n}+2(5n^{2}+2n-54)J_{2n-1})-105). \)
  • (c) \(\sum\limits_{k=0}^{n}k(-1)^{k}J_{2k+1}=\frac{1}{100}(\left( -1\right) ^{n}(-(5n^{2}-8n-51)J_{2n+2}+(20n^{2}+58n-191)J_{2n+1}-(5n^{2}-8n-51)J_{2n}-2(15n^{2}-4n-159)J_{2n-1})+299). \)
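The quadratic-coefficient identities produced by L'Hospital's rule can likewise be checked numerically. The sketch below (an illustration, not part of the paper) verifies Corollary 11 with exact rationals, extending the fourth-order Jacobsthal numbers backwards to \(J_{-1}=-1/2\) for the \(n=0\) instances of (b) and (c):

```python
from fractions import Fraction

# Fourth-order Jacobsthal numbers with J_{-1} = -1/2 obtained by running the
# recurrence J_n = J_{n-1} + J_{n-2} + J_{n-3} + 2*J_{n-4} backwards.
J = {0: Fraction(0), 1: Fraction(1), 2: Fraction(1), 3: Fraction(1)}
J[-1] = (J[3] - J[2] - J[1] - J[0]) / 2
for n in range(4, 30):
    J[n] = J[n - 1] + J[n - 2] + J[n - 3] + 2 * J[n - 4]

for n in range(0, 12):
    s = (-1) ** n
    assert sum(k*(-1)**k*J[k] for k in range(n + 1)) == Fraction(1, 36) * (
        s*(-(3*n*n + 5*n - 53)*J[n+3] + 2*(3*n*n + 2*n - 54)*J[n+2]
          - (3*n*n - 13*n - 53)*J[n+1] + 2*(3*n*n + 11*n - 45)*J[n]) + 2)
    assert sum(k*(-1)**k*J[2*k] for k in range(n + 1)) == Fraction(1, 100) * (
        s*(-(15*n*n - 4*n - 159)*J[2*n+2] + 2*(5*n*n + 2*n - 54)*J[2*n+1]
          + (35*n*n + 54*n - 350)*J[2*n] + 2*(5*n*n + 2*n - 54)*J[2*n-1]) - 105)
    assert sum(k*(-1)**k*J[2*k+1] for k in range(n + 1)) == Fraction(1, 100) * (
        s*(-(5*n*n - 8*n - 51)*J[2*n+2] + (20*n*n + 58*n - 191)*J[2*n+1]
          - (5*n*n - 8*n - 51)*J[2*n] - 2*(15*n*n - 4*n - 159)*J[2*n-1]) + 299)
print("Corollary 11 (a)-(c) verified for 0 <= n <= 11")
```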

From the last theorem, we have the following corollary which gives linear sum formulas of the fourth-order Jacobsthal-Lucas numbers (take \(W_{n}=j_{n}\) with \(j_{0}=2,j_{1}=1,j_{2}=5,j_{3}=10\)).

Corollary 12. For \(n\geq 0,\) fourth-order Jacobsthal-Lucas numbers have the following property:

  • (a) \(\sum\limits_{k=0}^{n}k(-1)^{k}j_{k}=\frac{1}{36}(\left( -1\right) ^{n}(-(3n^{2}+5n-53)j_{n+3}+2(3n^{2}+2n-54)j_{n+2}-(3n^{2}-13n-53)j_{n+1}+2(3n^{2}+11n-45)j_{n})+137). \)
  • (b) \(\sum\limits_{k=0}^{n}k(-1)^{k}j_{2k}=\frac{1}{100}(\left( -1\right) ^{n}(-(15n^{2}-4n-159)j_{2n+2}+2(5n^{2}+2n-54)j_{2n+1}+(35n^{2}+54n-350)j_{2n}+2(5n^{2}+2n-54)j_{2n-1})+121). \)
  • (c) \(\sum\limits_{k=0}^{n}k(-1)^{k}j_{2k+1}=\frac{1}{100}(\left( -1\right) ^{n}(-(5n^{2}-8n-51)j_{2n+2}+(20n^{2}+58n-191)j_{2n+1}-(5n^{2}-8n-51)j_{2n}-2(15n^{2}-4n-159)j_{2n-1})-484). \)

Taking \(W_{n}=K_{n}\) with \(K_{0}=3,K_{1}=1,K_{2}=3,K_{3}=10\) in the last theorem, we have the following corollary which presents linear sum formulas of the modified fourth-order Jacobsthal numbers.

Corollary 13. For \(n\geq 0,\) modified fourth-order Jacobsthal numbers have the following property:

  • (a) \(\sum\limits_{k=0}^{n}k(-1)^{k}K_{k}=\frac{1}{36}(\left( -1\right) ^{n}(-(3n^{2}+5n-53)K_{n+3}+2(3n^{2}+2n-54)K_{n+2}-(3n^{2}-13n-53)K_{n+1}+2(3n^{2}+11n-45)K_{n})+11). \)
  • (b) \(\sum\limits_{k=0}^{n}k(-1)^{k}K_{2k}=\frac{1}{100}(\left( -1\right) ^{n}(-(15n^{2}-4n-159)K_{2n+2}+2(5n^{2}+2n-54)K_{2n+1}+(35n^{2}+54n-350)K_{2n}+2(5n^{2}+2n-54)K_{2n-1})+843). \)
  • (c) \(\sum\limits_{k=0}^{n}k(-1)^{k}K_{2k+1}=\frac{1}{100}(\left( -1\right) ^{n}(-(5n^{2}-8n-51)K_{2n+2}+(20n^{2}+58n-191)K_{2n+1}-(5n^{2}-8n-51)K_{2n}-2(15n^{2}-4n-159)K_{2n-1})-592). \)

From the last theorem, we have the following corollary which gives linear sum formulas of the fourth-order Jacobsthal Perrin numbers (take \(W_{n}=Q_{n}\) with \(Q_{0}=3,Q_{1}=0,Q_{2}=2,Q_{3}=8\)).

Corollary 14. For \(n\geq 0,\) fourth-order Jacobsthal Perrin numbers have the following property:

  • (a) \(\sum\limits_{k=0}^{n}k(-1)^{k}Q_{k}=\frac{1}{36}(\left( -1\right) ^{n}(-(3n^{2}+5n-53)Q_{n+3}+2(3n^{2}+2n-54)Q_{n+2}-(3n^{2}-13n-53)Q_{n+1}+2(3n^{2}+11n-45)Q_{n})+62). \)
  • (b) \(\sum\limits_{k=0}^{n}k(-1)^{k}Q_{2k}=\frac{1}{100}(\left( -1\right) ^{n}(-(15n^{2}-4n-159)Q_{2n+2}+2(5n^{2}+2n-54)Q_{2n+1}+(35n^{2}+54n-350)Q_{2n}+2(5n^{2}+2n-54)Q_{2n-1})+894). \)
  • (c) \(\sum\limits_{k=0}^{n}k(-1)^{k}Q_{2k+1}=\frac{1}{100}(\left( -1\right) ^{n}(-(5n^{2}-8n-51)Q_{2n+2}+(20n^{2}+58n-191)Q_{2n+1}-(5n^{2}-8n-51)Q_{2n}-2(15n^{2}-4n-159)Q_{2n-1})-732). \)

Taking \(W_{n}=S_{n}\) with \(S_{0}=0,S_{1}=1,S_{2}=1,S_{3}=2\) in the last theorem, we have the following corollary which presents linear sum formulas of the adjusted fourth-order Jacobsthal numbers.

Corollary 15. For \(n\geq 0,\) adjusted fourth-order Jacobsthal numbers have the following property:

  • (a) \(\sum\limits_{k=0}^{n}k(-1)^{k}S_{k}=\frac{1}{36}(\left( -1\right) ^{n}(-(3n^{2}+5n-53)S_{n+3}+2(3n^{2}+2n-54)S_{n+2}-(3n^{2}-13n-53)S_{n+1}+2(3n^{2}+11n-45)S_{n})-51). \)
  • (b) \(\sum\limits_{k=0}^{n}k(-1)^{k}S_{2k}=\frac{1}{100}(\left( -1\right) ^{n}(-(15n^{2}-4n-159)S_{2n+2}+2(5n^{2}+2n-54)S_{2n+1}+(35n^{2}+54n-350)S_{2n}+2(5n^{2}+2n-54)S_{2n-1})-51). \)
  • (c) \(\sum\limits_{k=0}^{n}k(-1)^{k}S_{2k+1}=\frac{1}{100}(\left( -1\right) ^{n}(-(5n^{2}-8n-51)S_{2n+2}+(20n^{2}+58n-191)S_{2n+1}-(5n^{2}-8n-51)S_{2n}-2(15n^{2}-4n-159)S_{2n-1})+140). \)

From the last theorem, we have the following corollary which gives linear sum formulas of the modified fourth-order Jacobsthal-Lucas numbers (take \( W_{n}=R_{n}\) with \(R_{0}=4,R_{1}=1,R_{2}=3,R_{3}=7\)).

Corollary 16. For \(n\geq 0,\) modified fourth-order Jacobsthal-Lucas numbers have the following property:

  • (a) \(\sum\limits_{k=0}^{n}k(-1)^{k}R_{k}=\frac{1}{36}(\left( -1\right) ^{n}(-(3n^{2}+5n-53)R_{n+3}+2(3n^{2}+2n-54)R_{n+2}-(3n^{2}-13n-53)R_{n+1}+2(3n^{2}+11n-45)R_{n})+260). \)
  • (b) \(\sum\limits_{k=0}^{n}k(-1)^{k}R_{2k}=\frac{1}{100}(\left( -1\right) ^{n}(-(15n^{2}-4n-159)R_{2n+2}+2(5n^{2}+2n-54)R_{2n+1}+(35n^{2}+54n-350)R_{2n}+2(5n^{2}+2n-54)R_{2n-1})+977). \)
  • (c) \(\sum\limits_{k=0}^{n}k(-1)^{k}R_{2k+1}=\frac{1}{100}(\left( -1\right) ^{n}(-(5n^{2}-8n-51)R_{2n+2}+(20n^{2}+58n-191)R_{2n+1}-(5n^{2}-8n-51)R_{2n}-2(15n^{2}-4n-159)R_{2n-1})-7). \)

Taking \(x=-1,r=2,s=3,t=5,u=7\) in Theorem 3 (a), (b) and (c), we obtain the following proposition.

Proposition 3. If \(x=-1,r=2,s=3,t=5,u=7\) then for \(n\geq 0\) we have the following formulas:

  • (a) \(\sum\limits_{k=0}^{n}k(-1)^{k}W_{k}=\frac{1}{4}(\left( -1\right) ^{n}(-(2n-11)W_{n+3}+(6n-35) W_{n+2}+8W_{n+1}+7(2n-9)W_{n})-11W_{3}+35W_{2}-8W_{1}+63W_{0}).\)
  • (b) \(\sum\limits_{k=0}^{n}k(-1)^{k}W_{2k}=\frac{1}{36}(\left( -1\right) ^{n}(-(6n-7)W_{2n+2}+(6n+5)W_{2n+1}+72(n-1)W_{2n}+7(6n-13)W_{2n-1})+13W_{3}-33W_{2}-44W_{1}+7W_{0}). \)
  • (c) \(\sum\limits_{k=0}^{n}k(-1)^{k}W_{2k+1}=\frac{1}{36}(\left( -1\right) ^{n}(-(6n-19)W_{2n+2}+3(18n-17)W_{2n+1}+4(3n-14)W_{2n}-7(6n-7)W_{2n-1})-7W_{3}-5W_{2}+72W_{1}+91W_{0}). \)

From the last proposition, we have the following corollary which gives linear sum formulas of 4-primes numbers (take \(W_{n}=G_{n}\) with \( G_{0}=0,G_{1}=0,G_{2}=1,G_{3}=2\)).

Corollary 17. For \(n\geq 0,\) 4-primes numbers have the following properties:

  • (a) \(\sum\limits_{k=0}^{n}k(-1)^{k}G_{k}=\frac{1}{4}(\left( -1\right) ^{n}(-(2n-11)G_{n+3}+(6n-35) G_{n+2}+8G_{n+1}+7(2n-9)G_{n})+13).\)
  • (b) \(\sum\limits_{k=0}^{n}k(-1)^{k}G_{2k}=\frac{1}{36}(\left( -1\right) ^{n}(-(6n-7)G_{2n+2}+(6n+5)G_{2n+1}+72(n-1)G_{2n}+7(6n-13)G_{2n-1})-7).\)
  • (c) \(\sum\limits_{k=0}^{n}k(-1)^{k}G_{2k+1}=\frac{1}{36}(\left( -1\right) ^{n}(-(6n-19)G_{2n+2}+3(18n-17)G_{2n+1}+4(3n-14)G_{2n}-7(6n-7)G_{2n-1})-19).\)
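For an instance with non-unit coefficients, Corollary 17 can be checked the same way (a verification sketch, not part of the paper). Here \(r=2,s=3,t=5,u=7\), and the backward recurrence gives \(G_{-1}=(G_{3}-2G_{2}-3G_{1}-5G_{0})/7=0\):

```python
from fractions import Fraction

# 4-primes numbers G_n = 2G_{n-1} + 3G_{n-2} + 5G_{n-3} + 7G_{n-4},
# G_0=0, G_1=0, G_2=1, G_3=2; G_{-1} = 0 from the recurrence run backwards.
G = {-1: 0, 0: 0, 1: 0, 2: 1, 3: 2}
for n in range(4, 30):
    G[n] = 2*G[n - 1] + 3*G[n - 2] + 5*G[n - 3] + 7*G[n - 4]

for n in range(0, 12):
    s = (-1) ** n
    assert sum(k*(-1)**k*G[k] for k in range(n + 1)) == Fraction(1, 4) * (
        s*(-(2*n - 11)*G[n+3] + (6*n - 35)*G[n+2] + 8*G[n+1] + 7*(2*n - 9)*G[n]) + 13)
    assert sum(k*(-1)**k*G[2*k] for k in range(n + 1)) == Fraction(1, 36) * (
        s*(-(6*n - 7)*G[2*n+2] + (6*n + 5)*G[2*n+1]
          + 72*(n - 1)*G[2*n] + 7*(6*n - 13)*G[2*n-1]) - 7)
    assert sum(k*(-1)**k*G[2*k+1] for k in range(n + 1)) == Fraction(1, 36) * (
        s*(-(6*n - 19)*G[2*n+2] + 3*(18*n - 17)*G[2*n+1]
          + 4*(3*n - 14)*G[2*n] - 7*(6*n - 7)*G[2*n-1]) - 19)
print("Corollary 17 (a)-(c) verified for 0 <= n <= 11")
```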

Taking \(W_{n}=H_{n}\) with \(H_{0}=4,H_{1}=2,H_{2}=10,H_{3}=41\) in the last proposition, we have the following corollary which presents linear sum formulas of Lucas 4-primes numbers.

Corollary 18. For \(n\geq 0,\) Lucas 4-primes numbers have the following properties:

  • (a) \(\sum\limits_{k=0}^{n}k(-1)^{k}H_{k}=\frac{1}{4}(\left( -1\right) ^{n}(-(2n-11)H_{n+3}+(6n-35) H_{n+2}+8H_{n+1}+7(2n-9)H_{n})+135).\)
  • (b) \(\sum\limits_{k=0}^{n}k(-1)^{k}H_{2k}=\frac{1}{36}(\left( -1\right) ^{n}(-(6n-7)H_{2n+2}+(6n+5)H_{2n+1}+72(n-1)H_{2n}+7(6n-13)H_{2n-1})+143).\)
  • (c) \(\sum\limits_{k=0}^{n}k(-1)^{k}H_{2k+1}=\frac{1}{36}(\left( -1\right) ^{n}(-(6n-19)H_{2n+2}+3(18n-17)H_{2n+1}+4(3n-14)H_{2n}-7(6n-7)H_{2n-1})+171). \)

From the last proposition, we have the following corollary which gives linear sum formulas of modified 4-primes numbers (take \(W_{n}=E_{n}\) with \( E_{0}=0,E_{1}=0,E_{2}=1,E_{3}=1\)).

Corollary 19. For \(n\geq 0,\) modified 4-primes numbers have the following properties:

  • (a) \(\sum\limits_{k=0}^{n}k(-1)^{k}E_{k}=\frac{1}{4}(\left( -1\right) ^{n}(-(2n-11)E_{n+3}+(6n-35) E_{n+2}+8E_{n+1}+7(2n-9)E_{n})+24).\)
  • (b) \(\sum\limits_{k=0}^{n}k(-1)^{k}E_{2k}=\frac{1}{36}(\left( -1\right) ^{n}(-(6n-7)E_{2n+2}+(6n+5)E_{2n+1}+72(n-1)E_{2n}+7(6n-13)E_{2n-1})-20).\)
  • (c) \(\sum\limits_{k=0}^{n}k(-1)^{k}E_{2k+1}=\frac{1}{36}(\left( -1\right) ^{n}(-(6n-19)E_{2n+2}+3(18n-17)E_{2n+1}+4(3n-14)E_{2n}-7(6n-7)E_{2n-1})-12).\)

3.3. The case \(x=i\)

In this subsection we consider the special case \(x=i\). Taking \(x=i,r=s=t=u=1\) in Theorem 3 (a), (b) and (c), we obtain the following proposition.

Proposition 4. If \(x=i,r=s=t=u=1\) then for \(n\geq 0\) we have the following formulas:

  • (a) \(\sum\limits_{k=0}^{n}ki^{k}W_{k}=i^{n}(i(n+\left( 5-2i\right) )W_{n+3}+(1-i)(n+(\frac{9}{2}-\frac{5}{2} i))W_{n+2}+(-1-2i)(n+(4-2i))W_{n+1}-(n+(6-2i))W_{n})-(2+5i) W_{3}-(2-7i)W_{2}+(8+6i)W_{1}+(6-2i)W_{0}.\)
  • (b) \(\sum\limits_{k=0}^{n}ki^{k}W_{2k}=\frac{1}{9-40i}((-13-6i)i^{n}(n+( \frac{8}{41}-\frac{10}{41}i))W_{2n+2}+(14-3i)i^{n}(n+(\frac{81}{205}+\frac{32 }{205}i)) W_{2n+1}\)

    \(+(15-12i)i^{n}(n+(\frac{106}{123}-\frac{10}{41} i))W_{2n}+(9+i)i^{n}(n+(\frac{57}{82}+\frac{21}{82}i))W_{2n-1}-(6+3i) W_{3}+(10+i)W_{2}+2iW_{1}-(4-17i)W_{0}).\)

  • (c) \(\sum\limits_{k=0}^{n}ki^{k}W_{2k+1}=\frac{1}{9-40i}((1-9i)i^{n}(n-( \frac{25}{82}-\frac{21}{82}i))W_{2n+2}+(2-18i)i^{n}(n+(\frac{57}{82}+\frac{21 }{82}i))W_{2n+1}\)

    \(+(-4-5i)i^{n}(n-(\frac{33}{41}+\frac{10}{41} i))W_{2n}+(-13-6i)i^{n}(n+(\frac{8}{41}-\frac{10}{41}i))W_{2n-1}+(4-2i) W_{3}-(6+i)W_{2}-(10-14i)W_{1}-(6+3i)W_{0}).\)

From the above proposition, we have the following corollary which gives linear sum formulas of Tetranacci numbers (take \(W_{n}=M_{n}\) with \( M_{0}=0,M_{1}=1,M_{2}=1,M_{3}=2\)).

Corollary 20. For \(n\geq 0,\) Tetranacci numbers have the following properties.

  • (a) \(\sum\limits_{k=0}^{n}ki^{k}M_{k}=i^{n}(i(n+\left( 5-2i\right) )M_{n+3}+(1-i)(n+(\frac{9}{2}-\frac{5}{2} i))M_{n+2}+(-1-2i)(n+(4-2i))M_{n+1}-(n+(6-2i))M_{n})+(2+3i).\)
  • (b) \(\sum\limits_{k=0}^{n}ki^{k}M_{2k}=\frac{1}{9-40i}((-13-6i)i^{n}(n+( \frac{8}{41}-\frac{10}{41}i))M_{2n+2}+(14-3i)i^{n}(n+(\frac{81}{205}+\frac{32 }{205}i)) M_{2n+1}+(15-12i)i^{n}(n+(\frac{106}{123}-\frac{10}{41} i))M_{2n}+(9+i)i^{n}(n+(\frac{57}{82}+\frac{21}{82}i))M_{2n-1}+(-2-3i)).\)
  • (c) \(\sum\limits_{k=0}^{n}ki^{k}M_{2k+1}=\frac{1}{9-40i}((1-9i)i^{n}(n-( \frac{25}{82}-\frac{21}{82}i))M_{2n+2}+(2-18i)i^{n}(n+(\frac{57}{82}+\frac{21 }{82}i))M_{2n+1}+(-4-5i)i^{n}(n-(\frac{33}{41}+\frac{10}{41} i))M_{2n}+(-13-6i)i^{n}(n+(\frac{8}{41}-\frac{10}{41}i))M_{2n-1}+(-8+9i)).\)
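The complex-valued identities of Corollary 20 can also be spot-checked (an illustration, not part of the paper) using Python's built-in complex arithmetic; since the coefficients involve fractions such as \(8/41\) and \(57/82\), floating-point comparison with a tolerance is used:

```python
# Tetranacci numbers and the x = i sums of Corollary 20, checked with
# complex floating-point arithmetic (hence the tolerance in the comparison).
M = {-1: 0, 0: 0, 1: 1, 2: 1, 3: 2}
for n in range(4, 30):
    M[n] = M[n - 1] + M[n - 2] + M[n - 3] + M[n - 4]

I = 1j
for n in range(0, 12):
    w = I ** n
    # (a)
    lhs = sum(k * I**k * M[k] for k in range(n + 1))
    rhs = w*(I*(n + 5 - 2*I)*M[n+3] + (1 - I)*(n + 4.5 - 2.5*I)*M[n+2]
             + (-1 - 2*I)*(n + 4 - 2*I)*M[n+1] - (n + 6 - 2*I)*M[n]) + (2 + 3*I)
    assert abs(lhs - rhs) < 1e-6
    # (b)
    lhs = sum(k * I**k * M[2*k] for k in range(n + 1))
    rhs = ((-13 - 6*I)*w*(n + 8/41 - 10/41*I)*M[2*n+2]
           + (14 - 3*I)*w*(n + 81/205 + 32/205*I)*M[2*n+1]
           + (15 - 12*I)*w*(n + 106/123 - 10/41*I)*M[2*n]
           + (9 + I)*w*(n + 57/82 + 21/82*I)*M[2*n-1] + (-2 - 3*I)) / (9 - 40*I)
    assert abs(lhs - rhs) < 1e-6
    # (c)
    lhs = sum(k * I**k * M[2*k+1] for k in range(n + 1))
    rhs = ((1 - 9*I)*w*(n - (25/82 - 21/82*I))*M[2*n+2]
           + (2 - 18*I)*w*(n + 57/82 + 21/82*I)*M[2*n+1]
           + (-4 - 5*I)*w*(n - (33/41 + 10/41*I))*M[2*n]
           + (-13 - 6*I)*w*(n + 8/41 - 10/41*I)*M[2*n-1] + (-8 + 9*I)) / (9 - 40*I)
    assert abs(lhs - rhs) < 1e-6
print("Corollary 20 (a)-(c) verified for 0 <= n <= 11")
```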

Taking \(W_{n}=R_{n}\) with \(R_{0}=4,R_{1}=1,R_{2}=3,R_{3}=7\) in the above proposition, we have the following corollary which presents linear sum formulas of Tetranacci-Lucas numbers.

Corollary 21. For \(n\geq 0,\) Tetranacci-Lucas numbers have the following properties.

  • (a) \(\sum\limits_{k=0}^{n}ki^{k}R_{k}=i^{n}(i(n+\left( 5-2i\right) )R_{n+3}+(1-i)(n+(\frac{9}{2}-\frac{5}{2} i))R_{n+2}+(-1-2i)(n+(4-2i))R_{n+1}-(n+(6-2i))R_{n})+(12-16i).\)
  • (b) \(\sum\limits_{k=0}^{n}ki^{k}R_{2k}=\frac{1}{9-40i}((-13-6i)i^{n}(n+( \frac{8}{41}-\frac{10}{41}i))R_{2n+2}+(14-3i)i^{n}(n+(\frac{81}{205}+\frac{32 }{205}i)) R_{2n+1}+(15-12i)i^{n}(n+(\frac{106}{123}-\frac{10}{41} i))R_{2n}+(9+i)i^{n}(n+(\frac{57}{82}+\frac{21}{82}i))R_{2n-1}+(-28+52i)).\)
  • (c) \(\sum\limits_{k=0}^{n}ki^{k}R_{2k+1}=\frac{1}{9-40i}((1-9i)i^{n}(n-( \frac{25}{82}-\frac{21}{82}i))R_{2n+2}+(2-18i)i^{n}(n+(\frac{57}{82}+\frac{21 }{82}i))R_{2n+1}+(-4-5i)i^{n}(n-(\frac{33}{41}+\frac{10}{41} i))R_{2n}+(-13-6i)i^{n}(n+(\frac{8}{41}-\frac{10}{41}i))R_{2n-1}+(-24-15i)).\)

Corresponding sums of the other fourth order generalized Tetranacci numbers can be calculated similarly.

4. Linear sum formulas of generalized Tetranacci numbers with negative subscripts

The following theorem presents some linear sum formulas of generalized Tetranacci numbers with negative subscripts.

Theorem 6. Let \(x\) be a non-zero real or complex number. For \( n\geq 1\) we have the following formulas:

  • (a) If \(u+rx^{3}+sx^{2}+tx-x^{4}\neq 0,\) then \begin{equation*} \sum\limits_{k=1}^{n}kx^{k}W_{-k}=\frac{\Omega _{4}}{(u+rx^{3}+sx^{2}+tx-x^{4})^{2}}\,, \end{equation*} where

    \(\Omega _{4}=x^{n+1}(n(-u-rx^{3}-sx^{2}-tx+x^{4})-u+2rx^{3}+sx^{2}-3x^{4})W_{-n+3}+ x^{n+1}(n(r-x)(u+rx^{3}+sx^{2}+tx-x^{4})+4rx^{4}-tx^{2}-2r^{2}x^{3}\)

    \(+ru-2ux-2x^{5}-rsx^{2})W_{-n+2}+ x^{n+1}(n(s+rx-x^{2})(u+rx^{3}+sx^{2}+tx-x^{4})+2rx^{5}+2sx^{4}-2tx^{3}-3ux^{2}-r^{2}x^{4}-s^{2} x^{2}+su-x^{6}- 2rsx^{3}+rtx^{2}+2rux)W_{-n+1}\)

    \(+x^{n+1}(n(t+rx^{2}+sx-x^{3})(u+rx^{3}+sx^{2}+tx-x^{4})-4ux^{3}+tu+3rux^{2}+ 2sux)W_{-n}+ x(u-2rx^{3}-sx^{2}+3x^{4})W_{3}+ x(-4rx^{4}+tx^{2}+2r^{2}x^{3}-ru+2ux+2x^{5}+rsx^{2}) W_{2}\)

    \(+ x(-2rx^{5}-2sx^{4}+2tx^{3}+3ux^{2}+r^{2}x^{4}+s^{2}x^{2}-su+x^{6}+2rsx^{3}-rtx^{2}-2rux) W_{1}+ ux(-t-3rx^{2}-2sx+4x^{3})W_{0}. \)

  • (b) If \( r^{2}x^{3}+2rtx^{2}-s^{2}x^{2}-2sux+2sx^{3}+t^{2}x-u^{2}+2ux^{2}-x^{4}\neq 0\) then \begin{equation*} \sum\limits_{k=1}^{n}kx^{k}W_{-2k}=\frac{\Omega _{5}}{ (r^{2}x^{3}+2rtx^{2}-s^{2}x^{2}-2sux+2sx^{3}+t^{2}x-u^{2}+2ux^{2}-x^{4})^{2}}\,, \end{equation*} where

    \(\Omega _{5}=x^{n+1}(n(u+sx-x^{2})(2sx^{3}+t^{2}x+2ux^{2}+r^{2}x^{3}-s^{2}x^{2}-u^{2}-x^{4}+2rtx^{2}-2sux)+2sx^{5}+ux^{4}-s^{2}x^{4}-2t^{2}x^{3}+u^{2} x^{2}-u^{3}-x^{6}\)

    \(-2rtx^{4}-2su^{2} x-r^{2}sx^{4}+st^{2}x^{2}-2r^{2}ux^{3}-s^{2}ux^{2}-2rtux^{2})W_{-2n+2}+ x^{n+1}(n(ru+tx+rsx)(-2sx^{3}-t^{2}x-2ux^{2}-r^{2}x^{3}+s^{2}x^{2}+u^{2}+x^{4}-2rtx^{2}+2sux) \)

    \(+ru^{3}-2tx^{5}-t^{3}x^{2}-2rsx^{5}- 3rux^{4}+2stx^{4}+2tu^{2}x+2rs^{2}x^{4}+ r^{3}sx^{4}+2ru^{2}x^{2}+r^{2}tx^{4}+2r^{3}ux^{3}+2rsu^{2} x+4rsux^{3}+2stux^{2}-rst^{2}x^{2}+rs^{2}ux^{2}+2r^{2}t ux^{2})W_{-2n+1}+ \)

    \(x^{n+1}(n(2sx^{2}-s^{2}x+r^{2}x^{2}-su+ux-x^{3}+rtx)(2sx^{3}+t^{2}x+2ux^{2}+r^{2}x^{3}-s^{2}x^{2}-u^{2}-x^{4}+2rtx^{2}-2sux) +su^{3}-2u^{3}x-2ux^{5}-3t^{2}x^{4}+4u^{2}x^{3}\)

    \(+ 2r^{2}t^{2}x^{3}-3r^{2}u^{2}x^{2}-s^{2}t^{2}x^{2}-2rtx^{5}+5sux^{4}+rt^{3}x^{2}+4st^{2}x^{3}+r^{3}tx^{4}-6su^{2}x^{2}+ r^{2}ux^{4}+2s^{2}u^{2}x-4s^{2}ux^{3}+s^{3}ux^{2}+t^{2}ux^{2}+2rstx^{4}-2rtu^{2}x-2r^{2}sux^{3})W_{-2n}+ \)

    \( ux^{n+1}(n(t+rx)(-2sx^{3}-t^{2}x-2ux^{2}-r^{2}x^{3}+s^{2}x^{2}+u^{2}+x^{4}-2rtx^{2}+2sux) +tu^{2}-2rx^{5}-3tx^{4}+r^{3}x^{4}+2rsx^{4}+2ru^{2}x+4stx^{3}+2tux^{2}\)

    \(+rt^{2}x^{2}+2r^{2}tx^{3}-s^{2}tx^{2}+2rsu x^{2})W_{-2n-1}-x(tu^{2}-2rx^{5}-3tx^{4}+r^{3}x^{4}+2rsx^{4}+2ru^{2}x+4stx^{3}+2tux^{2}+rt^{2}x^{2}+2r^{2}tx^{3}-s^{2}tx^{2}+2rsux^{2}) W_{3}+ x(-2sx^{5}-ux^{4}-2r^{2}x^{5}\)

    \(+r^{4}x^{4}+s^{2}x^{4}+2t^{2}x^{3}-u^{2}x^{2}+u^{3}+x^{6}+r^{2}t^{2}x^{2}+rtu^{2}-rtx^{4}+2su^{2}x+3r^{2}sx^{4}-st^{2}x^{2}+2r^{2}u^{2}x+2r^{3}tx^{3}+2r^{2}ux^{3}+s^{2}ux^{2}+4rstx^{3}+4rtux^{2}-rs^{2}tx^{2}+2r^{2}sux^{2}) W_{2}\)

    \(-x(ru^{3}-2tx^{5}-t^{3}x^{2}-stu^{2}-3rux^{4}+5stx^{4}+2tu^{2}x+2ru^{2}x^{2}+r^{2}tx^{4}-4s^{2}tx^{3}+s^{3}tx^{2}+2r^{3}ux^{3}+4rsux^{3}-2rst^{2}x^{2}-2r^{2}stx^{3}-rs^{2}ux^{2}+2r^{2}tux^{2}) W_{1}+ \)

    \( ux(-su^{2}+t^{2}u-5sx^{4}+2u^{2}x-4ux^{3}-r^{2}x^{4}+4s^{2}x^{3}-s^{3}x^{2}+t^{2}x^{2}+2x^{5}+6sux^{2}-2s^{2}ux+2r^{2}sx^{3}+3r^{2}ux^{2}+2rstx^{2}+4rtux) W_{0}. \)

  • (c) If \( r^{2}x^{3}+2rtx^{2}-s^{2}x^{2}-2sux+2sx^{3}+t^{2}x-u^{2}+2ux^{2}-x^{4}\neq 0\) then \begin{equation*} \sum\limits_{k=1}^{n}kx^{k}W_{-2k+1}=\frac{\Omega _{6}}{ (r^{2}x^{3}+2rtx^{2}-s^{2}x^{2}-2sux+2sx^{3}+t^{2}x-u^{2}+2ux^{2}-x^{4})^{2}}\,, \end{equation*} where

    \(\Omega _{6}=x^{n+2}(n(t+rx)(-2sx^{3}-t^{2}x-2ux^{2}-r^{2}x^{3}+s^{2}x^{2}+u^{2}+x^{4}-2rtx^{2}+2sux) +2tu^{2}-rx^{5}-t^{3}x-2tx^{4}+3ru^{2}x-2rux^{3}+2stx^{3}+ rs^{2}x^{3}-2rt^{2}x^{2}-r^{2}tx^{3}\)

    \(+4rsux^{2}+2stux)W_{-2n+2}+ x^{n+2}(n(u+r^{2}x+rt+sx-x^{2})(2sx^{3}+t^{2}x+2ux^{2}+r^{2}x^{3}-s^{2}x^{2}-u^{2}-x^{4}+2rtx^{2}-2sux)+2s^{2}x^{4}-s^{3}x^{3}-3t^{2} x^{3}\)

    \(+4u^{2}x^{2}-2u^{3}-r^{2}s^{2}x^{3}+2r^{2}t^{2}x^{2}-2rtu^{2}+rt^{3}x-2rtx^{4}-5su^{2}x+ 6sux^{3}+t^{2}ux-sx^{5}-2ux^{4}+2st^{2}x^{2}-3r^{2}u^{2} x+r^{3}tx^{3}+r^{2}ux^{3}-4s^{2}ux^{2}-2rstux-4 r^{2}sux^{2})W_{-2n+1}+ \)

    \( x^{n+2}(n(ru-st+tx)(-2sx^{3}-t^{2}x-2ux^{2}-r^{2}x^{3}+s^{2}x^{2}+u^{2}+x^{4}-2rtx^{2}+2sux) +2ru^{3}-tx^{5}-2t^{3}x^{2}-2stu^{2}+ st^{3}x-2rux^{4}+2stx^{4}+3tu^{2}x-2tux^{3}-2rt^{2}x^{3}-s^{2}tx^{3}+r^{3}ux^{3}\)

    \(+2rs u^{2}x+2rsux^{3}-rt^{2}ux+4stux^{2}-2s^{2}tux-r^{2}stx^{3})W_{-2n}+ ux^{n+1}(n(u+sx-x^{2})(2sx^{3}+t^{2}x+2ux^{2}+r^{2}x^{3}-s^{2}x^{2}-u^{2}-x^{4}+2rtx^{2}-2sux) +2sx^{5}+ux^{4}-s^{2}x^{4}-2t^{2}x^{3}\)

    \(+u^{2} x^{2}-u^{3}-x^{6}-2rtx^{4}-2su^{2} x-r^{2}sx^{4}+st^{2}x^{2}-2r^{2}ux^{3}-s^{2}ux^{2}-2rtux^{2})W_{-2n-1}+ x(-2sx^{5}-ux^{4}+s^{2}x^{4}+2t^{2}x^{3}-u^{2}x^{2}+u^{3}+x^{6}+2rtx^{4}+2su^{2}x\)

    \(+r^{2}sx^{4}-st^{2}x^{2}+2r^{2}ux^{3}+s^{2}ux^{2}+2rtux^{2}) W_{3}-x(ru^{3}-2tx^{5}-t^{3}x^{2}-2rsx^{5}-3rux^{4}+2stx^{4}+2tu^{2}x+2rs^{2}x^{4}+r^{3}sx^{4}+2ru^{2}x^{2}+r^{2}tx^{4}+2r^{3}ux^{3}\)

    \(+2rsu^{2}x+4rsux^{3}+2stux^{2}-rst^{2}x^{2}+rs^{2}ux^{2}+2r^{2}tux^{2}) W_{2}\)

    \(-x(su^{3}-2u^{3}x-2ux^{5}-3t^{2}x^{4}+4u^{2}x^{3}+2r^{2}t^{2}x^{3}-3r^{2}u^{2}x^{2}-s^{2}t^{2}x^{2}-2rtx^{5}+5sux^{4}+rt^{3}x^{2}+4st^{2}x^{3}+r^{3}tx^{4}-6su^{2}x^{2}+r^{2}ux^{4}+2s^{2}u^{2}x-4s^{2}ux^{3}\)

    \(+s^{3}ux^{2}+t^{2}ux^{2}+2rstx^{4}-2rtu^{2}x-2r^{2}sux^{3}) W_{1}-ux(tu^{2}-2rx^{5}-3tx^{4}+r^{3}x^{4}+2rsx^{4}+2ru^{2}x+4stx^{3}+2tux^{2}+rt^{2}x^{2}+2r^{2}tx^{3}-s^{2}tx^{2}+2rsux^{2}) W_{0}. \)

Proof.

  • (a) Using the recurrence relation \begin{equation*} W_{-n+4}=r\times W_{-n+3}+s\times W_{-n+2}+t\times W_{-n+1}+u\times W_{-n}\,, \end{equation*} i.e., \begin{equation*} uW_{-n}=W_{-n+4}-rW_{-n+3}-sW_{-n+2}-tW_{-n+1}\,, \end{equation*} we obtain \begin{eqnarray*} unx^{n}W_{-n} &=&nx^{n}W_{-n+4}-rnx^{n}W_{-n+3}-snx^{n}W_{-n+2}-tnx^{n}W_{-n+1}, \\ u(n-1)x^{n-1}W_{-n+1} &=&(n-1)x^{n-1}W_{-n+5}-r(n-1)x^{n-1}W_{-n+4} \\ &&-s(n-1)x^{n-1}W_{-n+3}-t(n-1)x^{n-1}W_{-n+2} ,\\ u(n-2)x^{n-2}W_{-n+2} &=&(n-2)x^{n-2}W_{-n+6}-r(n-2)x^{n-2}W_{-n+5} \\ &&-s(n-2)x^{n-2}W_{-n+4}-t(n-2)x^{n-2}W_{-n+3} ,\\ &&\vdots \\ u\times 5\times x^{5}W_{-5} &=&5\times x^{5}W_{-1}-r\times 5\times x^{5}W_{-2}-s\times 5\times x^{5}W_{-3}-t\times 5\times x^{5}W_{-4} ,\\ u\times 4\times x^{4}W_{-4} &=&4\times x^{4}W_{0}-r\times 4\times x^{4}W_{-1}-s\times 4\times x^{4}W_{-2}-t\times 4\times x^{4}W_{-3}, \\ u\times 3\times x^{3}W_{-3} &=&3\times x^{3}W_{1}-r\times 3\times x^{3}W_{0}-s\times 3\times x^{3}W_{-1}-t\times 3\times x^{3}W_{-2} ,\\ u\times 2\times x^{2}W_{-2} &=&2\times x^{2}W_{2}-r\times 2\times x^{2}W_{1}-s\times 2\times x^{2}W_{0}-t\times 2\times x^{2}W_{-1} ,\\ u\times 1\times x^{1}W_{-1} &=&1\times x^{1}W_{3}-r\times 1\times x^{1}W_{2}-s\times 1\times x^{1}W_{1}-t\times 1\times x^{1}W_{0}. \end{eqnarray*} If we add the above equations side by side (and use Theorem 2 (a)), we get (a).
  • (b) and (c) Using the recurrence relation \begin{equation*} W_{-n+4}=rW_{-n+3}+sW_{-n+2}+tW_{-n+1}+uW_{-n}\,, \end{equation*} i.e., \begin{equation*} tW_{-n+1}=W_{-n+4}-rW_{-n+3}-sW_{-n+2}-uW_{-n}\,, \end{equation*} we obtain \begin{eqnarray*} tnx^{n}W_{-2n+1} &=&nx^{n}W_{-2n+4}-rnx^{n}W_{-2n+3}-snx^{n}W_{-2n+2}-unx^{n}W_{-2n} ,\\ t(n-1)x^{n-1}W_{-2n+3} &=&(n-1)x^{n-1}W_{-2n+6}-r(n-1)x^{n-1}W_{-2n+5} \\ &&-s(n-1)x^{n-1}W_{-2n+4}-u(n-1)x^{n-1}W_{-2n+2} ,\\ t(n-2)x^{n-2}W_{-2n+5} &=&(n-2)x^{n-2}W_{-2n+8}-r(n-2)x^{n-2}W_{-2n+7} \\ &&-s(n-2)x^{n-2}W_{-2n+6}-u(n-2)x^{n-2}W_{-2n+4} ,\\ &&\vdots \\ t\times 3\times x^{3}W_{-5} &=&3\times x^{3}W_{-2}-r\times 3\times x^{3}W_{-3}-s\times 3\times x^{3}W_{-4}-u\times 3\times x^{3}W_{-6}, \\ t\times 2\times x^{2}W_{-3} &=&2\times x^{2}W_{0}-r\times 2\times x^{2}W_{-1}-s\times 2\times x^{2}W_{-2}-u\times 2\times x^{2}W_{-4} ,\\ t\times 1\times x^{1}W_{-1} &=&1\times x^{1}W_{2}-r\times 1\times x^{1}W_{1}-s\times 1\times x^{1}W_{0}-u\times 1\times x^{1}W_{-2}. \end{eqnarray*} If we add the equations side by side, we get
    \begin{align}\label{equati:weqratxz} t\sum\limits_{k=1}^{n}kx^{k}W_{-2k+1} =&(-(n+1)x^{n+1}W_{-2n+2}-(n+2)x^{n+2}W_{-2n}+2\times x^{2}W_{0} +1\times x^{1}W_{2}+x^{2}\sum\limits_{k=1}^{n}kx^{k}W_{-2k}\notag \\ &+2x^{2}\sum\limits_{k=1}^{n}x^{k}W_{-2k}) -r(-(n+1)x^{n+1}W_{-2n+1}+1\times x^{1}W_{1}+x^{1}\sum\limits_{k=1}^{n}kx^{k}W_{-2k+1}+x^{1} \sum\limits_{k=1}^{n}x^{k}W_{-2k+1}) \notag \\ &-s(-(n+1)x^{n+1}W_{-2n}+1\times x^{1}W_{0}+x^{1}\sum\limits_{k=1}^{n}kx^{k}W_{-2k}+x^{1}\sum\limits_{k=1}^{n}x^{k}W_{-2k}) -u(\sum\limits_{k=1}^{n}kx^{k}W_{-2k}). \end{align}
    (4)
    Similarly, using the recurrence relation \begin{equation*} W_{-n+4}=rW_{-n+3}+sW_{-n+2}+tW_{-n+1}+uW_{-n}\,, \end{equation*} i.e., \begin{equation*} tW_{-n}=W_{-n+3}-rW_{-n+2}-sW_{-n+1}-uW_{-n-1}\,, \end{equation*} we obtain \begin{eqnarray*} tnx^{n}W_{-2n} &=&nx^{n}W_{-2n+3}-rnx^{n}W_{-2n+2}-snx^{n}W_{-2n+1}-unx^{n}W_{-2n-1} ,\\ t(n-1)x^{n-1}W_{-2n+2} &=&(n-1)\times x^{n-1}W_{-2n+5}-r(n-1)x^{n-1}W_{-2n+4} \\ &&-s(n-1)x^{n-1}W_{-2n+3}-u(n-1)x^{n-1}W_{-2n+1} ,\\ t(n-2)x^{n-2}W_{-2n+4} &=&(n-2)\times x^{n-2}W_{-2n+7}-r(n-2)x^{n-2}W_{-2n+6} \\ &&-s(n-2)x^{n-2}W_{-2n+5}-u(n-2)x^{n-2}W_{-2n+3}, \\ &&\vdots \\ t\times 3\times x^{3}W_{-6} &=&3\times x^{3}W_{-3}-r\times 3\times x^{3}W_{-4}-s\times 3\times x^{3}W_{-5}-u\times 3\times x^{3}W_{-7}, \\ t\times 2\times x^{2}W_{-4} &=&2\times x^{2}W_{-1}-r\times 2\times x^{2}W_{-2}-s\times 2\times x^{2}W_{-3}-u\times 2\times x^{2}W_{-5} ,\\ t\times 1\times x^{1}W_{-2} &=&1\times x^{1}W_{1}-r\times 1\times x^{1}W_{0}-s\times 1\times x^{1}W_{-1}-u\times 1\times x^{1}W_{-3}. \end{eqnarray*} If we add the equations side by side, we get
    \begin{eqnarray}\label{equati:hutysd} t\sum\limits_{k=1}^{n}kx^{k}W_{-2k} &=&(-(n+1)x^{n+1}W_{-2n+1}+1\times x^{1}W_{1}+x^{1}\sum\limits_{k=1}^{n}kx^{k}W_{-2k+1}+x^{1} \sum\limits_{k=1}^{n}x^{k}W_{-2k+1}) \notag \\ &&-r(-(n+1)x^{n+1}W_{-2n}+1\times x^{1}W_{0}+x^{1}\sum\limits_{k=1}^{n}kx^{k}W_{-2k}+x^{1}\sum\limits_{k=1}^{n}x^{k}W_{-2k}) \notag \\ &&-s(\sum\limits_{k=1}^{n}kx^{k}W_{-2k+1})-u(nx^{n}W_{-2n-1}+x^{-1} \sum\limits_{k=1}^{n}kx^{k}W_{-2k+1}-x^{-1}\sum\limits_{k=1}^{n}x^{k}W_{-2k+1}). \end{eqnarray}
    (5)
    Then, solving the system (4)-(5) (and using Theorem 2 (b) and (c)), the required results (b) and (c) follow.

Remark 2. Note that the proof of Theorem 6 can be done by taking the derivative of the formulas in Theorem 2. In fact, since \begin{eqnarray*} \sum\limits_{k=1}^{n}x^{k}W_{-k} &=&\frac{\Theta _{4}(x)}{rx^{3}+sx^{2}+tx+u-x^{4}}, \\ \sum\limits_{k=1}^{n}x^{k}W_{-2k} &=&\frac{x\Theta _{5}(x)}{ 2sx^{3}+t^{2}x+2ux^{2}+r^{2}x^{3}-s^{2}x^{2}-u^{2}-x^{4}+2rtx^{2}-2sux}, \\ \sum\limits_{k=1}^{n}x^{k}W_{-2k+1} &=&\frac{x\Theta _{6}(x)}{ 2sx^{3}+t^{2}x+2ux^{2}+r^{2}x^{3}-s^{2}x^{2}-u^{2}-x^{4}+2rtx^{2}-2sux}, \end{eqnarray*} by taking the derivative of the both sides of the above formulas with respect to \(x\), we get \begin{eqnarray*} \sum\limits_{k=1}^{n}kx^{k-1}W_{-k} &=&\frac{(rx^{3}+sx^{2}+tx+u-x^{4})\Theta _{4}^{^{\prime }}(x)-(-4x^{3}+3rx^{2}+2sx+t)\Theta _{4}(x)}{ (rx^{3}+sx^{2}+tx+u-x^{4})^{2}}, \\ \sum\limits_{k=1}^{n}kx^{k-1}W_{-2k} &=&\frac{ \begin{array}{c} (2sx^{3}+t^{2}x+2ux^{2}+r^{2}x^{3}-s^{2}x^{2}-u^{2}-x^{4}+2rtx^{2}-2sux)( \Theta _{5}(x)+x\Theta _{5}^{^{\prime }}(x)) \\ -(3r^{2}x^{2}+4rtx-2s^{2}x+6sx^{2}-2us+t^{2}-4x^{3}+ 4ux)x\Theta _{5}(x) \end{array} }{ (2sx^{3}+t^{2}x+2ux^{2}+r^{2}x^{3}-s^{2}x^{2}-u^{2}-x^{4}+2rtx^{2}-2sux)^{2}} , \\ \sum\limits_{k=1}^{n}kx^{k-1}W_{-2k+1} &=&\frac{ \begin{array}{c} (2sx^{3}+t^{2}x+2ux^{2}+r^{2}x^{3}-s^{2}x^{2}-u^{2}-x^{4}+2rtx^{2}-2sux)( \Theta _{6}(x)+x\Theta _{6}^{^{\prime }}(x)) \\ -(3r^{2}x^{2}+4rtx-2s^{2}x+6sx^{2}-2us+t^{2}-4x^{3}+ 4ux)x\Theta _{6}(x) \end{array} }{ (2sx^{3}+t^{2}x+2ux^{2}+r^{2}x^{3}-s^{2}x^{2}-u^{2}-x^{4}+2rtx^{2}-2sux)^{2}} , \end{eqnarray*} i.e., \begin{eqnarray*} \sum\limits_{k=1}^{n}kx^{k}W_{-k} &=&x\frac{(rx^{3}+sx^{2}+tx+u-x^{4})\Theta _{4}^{^{\prime }}(x)-(-4x^{3}+3rx^{2}+2sx+t)\Theta _{4}(x)}{ (rx^{3}+sx^{2}+tx+u-x^{4})^{2}}, \\ \sum\limits_{k=1}^{n}kx^{k}W_{-2k} &=&x\frac{ \begin{array}{c} (2sx^{3}+t^{2}x+2ux^{2}+r^{2}x^{3}-s^{2}x^{2}-u^{2}-x^{4}+2rtx^{2}-2sux)( \Theta _{5}(x)+x\Theta _{5}^{^{\prime }}(x)) \\ -(3r^{2}x^{2}+4rtx-2s^{2}x+6sx^{2}-2us+t^{2}-4x^{3}+ 4ux)x\Theta _{5}(x) 
\end{array} }{ (2sx^{3}+t^{2}x+2ux^{2}+r^{2}x^{3}-s^{2}x^{2}-u^{2}-x^{4}+2rtx^{2}-2sux)^{2}} , \\ \sum\limits_{k=1}^{n}kx^{k}W_{-2k+1} &=&x\frac{ \begin{array}{c} (2sx^{3}+t^{2}x+2ux^{2}+r^{2}x^{3}-s^{2}x^{2}-u^{2}-x^{4}+2rtx^{2}-2sux)( \Theta _{6}(x)+x\Theta _{6}^{^{\prime }}(x)) \\ -(3r^{2}x^{2}+4rtx-2s^{2}x+6sx^{2}-2us+t^{2}-4x^{3}+ 4ux)x\Theta _{6}(x) \end{array} }{ (2sx^{3}+t^{2}x+2ux^{2}+r^{2}x^{3}-s^{2}x^{2}-u^{2}-x^{4}+2rtx^{2}-2sux)^{2}} , \end{eqnarray*} where \(\Theta _{4}^{^{\prime }}(x),\) \(\Theta _{5}^{^{\prime }}(x)\) and \( \Theta _{6}^{^{\prime }}(x)\) denote the derivatives of \(\Theta _{4}(x),\) \( \Theta _{5}(x)\) and \(\Theta _{6}(x)\), respectively.

5. Specific cases

In this section, for specific values of \(x,\) we present the closed-form solutions (identities) of the sums \(\sum\limits_{k=1}^{n}kx^{k}W_{-k},\) \( \sum\limits_{k=1}^{n}kx^{k}W_{-2k}\) and \(\sum\limits_{k=1}^{n}kx^{k}W_{-2k+1}\) for particular choices of the sequence \(\{W_{n}\}.\)

5.1. The case \(x=1\)

In this subsection we consider the special case \(x=1\).

The case \(x=1\) of Theorem 6 is given in Soykan [34].

We only consider the case \(x=1,r=1,s=1,t=1,u=2\) (which is not considered in [34]).

Observe that setting \(x=1,r=1,s=1,t=1,u=2\) (i.e., for the generalized fourth order Jacobsthal case) in Theorem 6 (b) and (c) makes the right-hand side of the sum formulas an indeterminate form. An application of L'Hospital's rule, however, provides the evaluation of these sum formulas.

Taking \(r=1,s=1,t=1,u=2\) in Theorem 6, we obtain the following theorem.

Theorem 7. If \(r=1,s=1,t=1,u=2\) then for \(n\geq 1\) we have the following formulas:

  • (a) \(\sum\limits_{k=1}^{n}kW_{-k}=\frac{1}{16}(-(4n+2) W_{-n+3}-4W_{-n+2}+(4n-2)W_{-n+1}+(8n+4)W_{-n}+2W_{3}+4W_{2}+2W_{1}-4W_{0}).\)
  • (b) \(\sum\limits_{k=1}^{n}kW_{-2k}=\frac{1}{72} ((6n^{2}-8n-29)W_{-2n+2}-2(6n^{2}+4n-19)W_{-2n+1}+(6n^{2}+28n-11)W_{-2n}-2(6n^{2}+4n-19)W_{-2n-1}-19W_{3}+48W_{2}-19W_{1}+30W_{0}). \)
  • (c) \(\sum\limits_{k=1}^{n}kW_{-2k+1}=\frac{1}{72} (-(6n^{2}+16n-9)W_{-2n+2}+4(3n^{2}+5n-10)W_{-2n+1}-(6n^{2}+16n-9)W_{-2n}+2(6n^{2}-8n-29)W_{-2n-1}+29W_{3}-38W_{2}+11W_{1}-38W_{0}). \)

Proof.

  • (a) We use Theorem 6 (a). If we set \( r=1,s=1,t=1,u=2\) and \(x=1\) in Theorem 6 (a) (here the right-hand side is not indeterminate), we get (a).
  • (b) We use Theorem 6 (b). If we set \( r=1,s=1,t=1,u=2\) in Theorem 6 (b) then we have \begin{equation*} \sum\limits_{k=1}^{n}kx^{k}W_{-2k}=\frac{g_{6}(x)}{\left( x-1\right) ^{2}\left( x-4\right) ^{2}\left( x+1\right) ^{4}} \end{equation*} where

    \(g_{6}(x)=- x^{n+1}(8x+x^{2}+6x^{3}+2x^{4}-2x^{5}+x^{6}+n(-x^{2}+x+2)(x^{4}-3x^{3}-5x^{2}+3x+4)+8)W_{-2n+2}\)

    \(+ x^{n+1}(16x+16x^{2}+12x^{3}-4x^{5}+n(2x+2)(x^{4}-3x^{3}-5x^{2}+3x+4)+8)W_{-2n+1}- x^{n+1}(16x+32x^{2}-10x^{3}\)

    \(-12x^{4}+6x^{5}+n(-x^{3}+3x^{2}+2x-2)(x^{4}-3x^{3}-5x^{2}+3x+4)-8)W_{-2n}+ 2x^{n+1}(8x+8x^{2}+6x^{3}-2x^{5}+n(x+1)(x^{4}-3x^{3}-5x^{2}+3x+4)+4)W_{-2n-1}-x(-2x^{5}\)

    \(+6x^{3}+8x^{2}+8x+4)W_{3}+x(x^{6}-4x^{5}+2x^{4}+12x^{3}+9x^{2}+16x+12)W_{2}-x(-2x^{5}+6x^{3}+8x^{2}+8x+4) W_{1}- 2x(-2x^{5}+6x^{4}+2x^{3}-20x^{2}-12x+2)W_{0}\,. \)

    For \(x=1,\) the right-hand side of the above sum formula is an indeterminate form, so we can use L'Hospital's rule (twice). Then we get (b) using

    \( \sum\limits_{k=1}^{n}kW_{-2k} =\left. \frac{\frac{d^{2}}{dx^{2}}\left( g_{6}(x)\right) }{\frac{d^{2}}{dx^{2}}\left( \left( x-1\right) ^{2}\left( x-4\right) ^{2}\left( x+1\right) ^{4}\right) }\right\vert _{x=1} =\frac{1}{72}((6n^{2}-8n-29)W_{-2n+2}-2(6n^{2}+4n-19)W_{-2n+1} +(6n^{2}+28n-11)W_{-2n}-2(6n^{2}+4n-19)W_{-2n-1} -19W_{3}+48W_{2}-19W_{1}+30W_{0}). \)

  • (c) We use Theorem 6 (c). If we set \( r=1,s=1,t=1,u=2\) in Theorem 6 (c) then we have \begin{equation*} \sum\limits_{k=1}^{n}kx^{k}W_{-2k+1}=\frac{g_{7}(x)}{\left( x-1\right) ^{2}\left( x-4\right) ^{2}\left( x+1\right) ^{4}}\,, \end{equation*} where

    \( g_{7}(x)=x^{n+2}(15x+6x^{2}-2x^{3}-2x^{4}-x^{5}+n(x+1)(x^{4}-3x^{3}-5x^{2}+3x+4)+8)W_{-2n+2}- x^{n+2}(33x-4x^{2}-10x^{3}+4x^{4}+x^{5}+n(-x^{2}+2x+3)(x^{4}-3x^{3}-5x^{2}+3x+4)+24)W_{-2n+1}\)

    \(+x^{n+2}(15x+6x^{2}-2x^{3}-2x^{4}-x^{5}+n(x+1)(x^{4}-3x^{3}-5x^{2}+3x+4)+8)W_{-2n}- 2x^{n+1}(8x+x^{2}+6x^{3}+2x^{4}-2x^{5}+x^{6}+n(-x^{2}+x+2)(x^{4}-3x^{3}-5x^{2}+3x+4)+8)W_{-2n-1}\)

    \(+x (x^{6}-2x^{5}+2x^{4}+6x^{3}+x^{2}+8x+8)W_{3}-x(-4x^{5}+12x^{3}+16x^{2}+16x+8)W_{2}+ x(6x^{5}-12x^{4}-10x^{3}+32x^{2}+16x-8)W_{1}-2x (-2x^{5}+6x^{3}+8x^{2}+8x+4)W_{0}. \)

    For \(x=1,\) the right-hand side of the above sum formula is an indeterminate form, so we can use L'Hospital's rule (twice). Then we get (c) using

    \( \sum\limits_{k=1}^{n}kW_{-2k+1} =\left. \frac{\frac{d^{2}}{dx^{2}}\left( g_{7}(x)\right) }{\frac{d^{2}}{dx^{2}}\left( \left( x-1\right) ^{2}\left( x-4\right) ^{2}\left( x+1\right) ^{4}\right) }\right\vert _{x=1} =\frac{1}{72}(-(6n^{2}+16n-9)W_{-2n+2}+4(3n^{2}+5n-10)W_{-2n+1} -(6n^{2}+16n-9)W_{-2n}+2(6n^{2}-8n-29)W_{-2n-1} \)

    \(+29W_{3}-38W_{2}+11W_{1}-38W_{0}). \)
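The identities of Theorem 7 can also be checked numerically for small \(n\). The following Python sketch (a verification aid, not part of the original derivation; the helper name check_theorem7 is ours) extends arbitrary initial values \(W_{0},W_{1},W_{2},W_{3}\) backwards through the inverted recurrence \(2W_{-n}=W_{-n+4}-W_{-n+3}-W_{-n+2}-W_{-n+1}\) and compares both sides of (a), (b) and (c) in exact rational arithmetic:

```python
from fractions import Fraction as F

# Sanity check of Theorem 7 (the x = 1 case with r = s = t = 1, u = 2).
# Negative-index terms come from the inverted recurrence
#   2*W_{-n} = W_{-n+4} - W_{-n+3} - W_{-n+2} - W_{-n+1},
# evaluated with exact fractions so that equality tests are exact.
def check_theorem7(W0, W1, W2, W3, N):
    W = {0: F(W0), 1: F(W1), 2: F(W2), 3: F(W3)}
    for m in range(1, 2 * N + 2):
        W[-m] = (W[-m + 4] - W[-m + 3] - W[-m + 2] - W[-m + 1]) / 2
    for n in range(1, N + 1):
        # (a)
        if sum(k * W[-k] for k in range(1, n + 1)) != (
            -(4 * n + 2) * W[-n + 3] - 4 * W[-n + 2] + (4 * n - 2) * W[-n + 1]
            + (8 * n + 4) * W[-n] + 2 * W[3] + 4 * W[2] + 2 * W[1] - 4 * W[0]
        ) / 16:
            return False
        # (b)
        if sum(k * W[-2 * k] for k in range(1, n + 1)) != (
            (6 * n * n - 8 * n - 29) * W[-2 * n + 2]
            - 2 * (6 * n * n + 4 * n - 19) * W[-2 * n + 1]
            + (6 * n * n + 28 * n - 11) * W[-2 * n]
            - 2 * (6 * n * n + 4 * n - 19) * W[-2 * n - 1]
            - 19 * W[3] + 48 * W[2] - 19 * W[1] + 30 * W[0]
        ) / 72:
            return False
        # (c)
        if sum(k * W[-2 * k + 1] for k in range(1, n + 1)) != (
            -(6 * n * n + 16 * n - 9) * W[-2 * n + 2]
            + 4 * (3 * n * n + 5 * n - 10) * W[-2 * n + 1]
            - (6 * n * n + 16 * n - 9) * W[-2 * n]
            + 2 * (6 * n * n - 8 * n - 29) * W[-2 * n - 1]
            + 29 * W[3] - 38 * W[2] + 11 * W[1] - 38 * W[0]
        ) / 72:
            return False
    return True
```

For instance, the initial values \((0,1,1,1)\) exercise the fourth-order Jacobsthal case and \((2,1,5,10)\) the fourth-order Jacobsthal-Lucas case treated below.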

Taking \(W_{n}=J_{n}\) with \(J_{0}=0,J_{1}=1,J_{2}=1,J_{3}=1\) in the last theorem, we have the following corollary, which presents the linear sum formulas of the fourth-order Jacobsthal numbers.

Corollary 22. For \(n\geq 1,\) fourth-order Jacobsthal numbers have the following properties:

  • (a) \(\sum\limits_{k=1}^{n}kJ_{-k}=\frac{1}{16}(-(4n+2) J_{-n+3}-4J_{-n+2}+(4n-2)J_{-n+1}+(8n+4)J_{-n}+8).\)
  • (b) \(\sum\limits_{k=1}^{n}kJ_{-2k}=\frac{1}{72} ((6n^{2}-8n-29)J_{-2n+2}-2(6n^{2}+4n-19)J_{-2n+1}+(6n^{2}+28n-11)J_{-2n}-2(6n^{2}+4n-19)J_{-2n-1}+10). \)
  • (c) \(\sum\limits_{k=1}^{n}kJ_{-2k+1}=\frac{1}{72} (-(6n^{2}+16n-9)J_{-2n+2}+4(3n^{2}+5n-10)J_{-2n+1}-(6n^{2}+16n-9)J_{-2n}+2(6n^{2}-8n-29)J_{-2n-1}+2). \)

From the last theorem, we have the following corollary, which gives the linear sum formulas of the fourth-order Jacobsthal-Lucas numbers (take \(W_{n}=j_{n}\) with \(j_{0}=2,j_{1}=1,j_{2}=5,j_{3}=10\)).

Corollary 23. For \(n\geq 1,\) fourth-order Jacobsthal-Lucas numbers have the following properties:

  • (a) \(\sum\limits_{k=1}^{n}kj_{-k}=\frac{1}{16}(-(4n+2) j_{-n+3}-4j_{-n+2}+(4n-2)j_{-n+1}+(8n+4)j_{-n}+34).\)
  • (b) \(\sum\limits_{k=1}^{n}kj_{-2k}=\frac{1}{72} ((6n^{2}-8n-29)j_{-2n+2}-2(6n^{2}+4n-19)j_{-2n+1}+(6n^{2}+28n-11)j_{-2n}-2(6n^{2}+4n-19)j_{-2n-1}+91). \)
  • (c) \(\sum\limits_{k=1}^{n}kj_{-2k+1}=\frac{1}{72} (-(6n^{2}+16n-9)j_{-2n+2}+4(3n^{2}+5n-10)j_{-2n+1}-(6n^{2}+16n-9)j_{-2n}+2(6n^{2}-8n-29)j_{-2n-1}+35). \)

Taking \(W_{n}=K_{n}\) with \(K_{0}=3,K_{1}=1,K_{2}=3,K_{3}=10\) in the last theorem, we have the following corollary, which presents the linear sum formulas of the modified fourth-order Jacobsthal numbers.

Corollary 24. For \(n\geq 1,\) modified fourth-order Jacobsthal numbers have the following properties:

  • (a) \(\sum\limits_{k=1}^{n}kK_{-k}=\frac{1}{16}(-(4n+2) K_{-n+3}-4K_{-n+2}+(4n-2)K_{-n+1}+(8n+4)K_{-n}+22).\)
  • (b) \(\sum\limits_{k=1}^{n}kK_{-2k}=\frac{1}{72} ((6n^{2}-8n-29)K_{-2n+2}-2(6n^{2}+4n-19)K_{-2n+1}+(6n^{2}+28n-11)K_{-2n}-2(6n^{2}+4n-19)K_{-2n-1}+25). \)
  • (c) \(\sum\limits_{k=1}^{n}kK_{-2k+1}=\frac{1}{72} (-(6n^{2}+16n-9)K_{-2n+2}+4(3n^{2}+5n-10)K_{-2n+1}-(6n^{2}+16n-9)K_{-2n}+2(6n^{2}-8n-29)K_{-2n-1}+73). \)

From the last theorem, we have the following corollary, which gives the linear sum formulas of the fourth-order Jacobsthal Perrin numbers (take \(W_{n}=Q_{n}\) with \(Q_{0}=3,Q_{1}=0,Q_{2}=2,Q_{3}=8\)).

Corollary 25. For \(n\geq 1,\) fourth-order Jacobsthal Perrin numbers have the following properties:

  • (a) \(\sum\limits_{k=1}^{n}kQ_{-k}=\frac{1}{16}(-(4n+2) Q_{-n+3}-4Q_{-n+2}+(4n-2)Q_{-n+1}+(8n+4)Q_{-n}+12).\)
  • (b) \(\sum\limits_{k=1}^{n}kQ_{-2k}=\frac{1}{72} ((6n^{2}-8n-29)Q_{-2n+2}-2(6n^{2}+4n-19)Q_{-2n+1}+(6n^{2}+28n-11)Q_{-2n}-2(6n^{2}+4n-19)Q_{-2n-1}+34). \)
  • (c) \(\sum\limits_{k=1}^{n}kQ_{-2k+1}=\frac{1}{72} (-(6n^{2}+16n-9)Q_{-2n+2}+4(3n^{2}+5n-10)Q_{-2n+1}-(6n^{2}+16n-9)Q_{-2n}+2(6n^{2}-8n-29)Q_{-2n-1}+42). \)

Taking \(W_{n}=S_{n}\) with \(S_{0}=0,S_{1}=1,S_{2}=1,S_{3}=2\) in the last theorem, we have the following corollary, which presents the linear sum formulas of the adjusted fourth-order Jacobsthal numbers.

Corollary 26. For \(n\geq 1,\) adjusted fourth-order Jacobsthal numbers have the following properties:

  • (a) \(\sum\limits_{k=1}^{n}kS_{-k}=\frac{1}{16}(-(4n+2) S_{-n+3}-4S_{-n+2}+(4n-2)S_{-n+1}+(8n+4)S_{-n}+10).\)
  • (b) \(\sum\limits_{k=1}^{n}kS_{-2k}=\frac{1}{72} ((6n^{2}-8n-29)S_{-2n+2}-2(6n^{2}+4n-19)S_{-2n+1}+(6n^{2}+28n-11)S_{-2n}-2(6n^{2}+4n-19)S_{-2n-1}-9). \)
  • (c) \(\sum\limits_{k=1}^{n}kS_{-2k+1}=\frac{1}{72} (-(6n^{2}+16n-9)S_{-2n+2}+4(3n^{2}+5n-10)S_{-2n+1}-(6n^{2}+16n-9)S_{-2n}+2(6n^{2}-8n-29)S_{-2n-1}+31). \)

From the last theorem, we have the following corollary, which gives the linear sum formulas of the modified fourth-order Jacobsthal-Lucas numbers (take \( W_{n}=R_{n}\) with \(R_{0}=4,R_{1}=1,R_{2}=3,R_{3}=7\)).

Corollary 27. For \(n\geq 1,\) modified fourth-order Jacobsthal-Lucas numbers have the following properties:

  • (a) \(\sum\limits_{k=1}^{n}kR_{-k}=\frac{1}{16}(-(4n+2) R_{-n+3}-4R_{-n+2}+(4n-2)R_{-n+1}+(8n+4)R_{-n}+12).\)
  • (b) \(\sum\limits_{k=1}^{n}kR_{-2k}=\frac{1}{72} ((6n^{2}-8n-29)R_{-2n+2}-2(6n^{2}+4n-19)R_{-2n+1}+(6n^{2}+28n-11)R_{-2n}-2(6n^{2}+4n-19)R_{-2n-1}+112). \)
  • (c) \(\sum\limits_{k=1}^{n}kR_{-2k+1}=\frac{1}{72} (-(6n^{2}+16n-9)R_{-2n+2}+4(3n^{2}+5n-10)R_{-2n+1}-(6n^{2}+16n-9)R_{-2n}+2(6n^{2}-8n-29)R_{-2n-1}-52). \)

5.2. The case \(x=-1\)

In this subsection we consider the special case \(x=-1\).

Taking \(x=-1,\) \(r=s=t=u=1\) in Theorem 6 (a) and (b) (or (c)), we obtain the following proposition.

Proposition 5. If \(r=s=t=u=1\) then for \(n\geq 1\) we have the following formulas:

  • (a) \(\sum\limits_{k=1}^{n}k(-1)^{k}W_{-k}= \left( -1\right) ^{n}(-(n-5)W_{-n+3}+(2n-9)W_{-n+2}-(n-2)W_{-n+1}+(2n-6)W_{-n})-5W_{3}+9W_{2}-2W_{1}+6W_{0}. \)
  • (b) \(\sum\limits_{k=1}^{n}k(-1)^{k}W_{-2k}=\left( -1\right) ^{n}(-(n-2)W_{-2n+2}+ (n-3)W_{-2n+1}+ (2n-2)W_{-2n}+W_{-2n-1})-W_{3}-W_{2}+4W_{1}+3W_{0}.\)
  • (c) \(\sum\limits_{k=1}^{n}k(-1)^{k}W_{-2k+1}=\left( -1\right) ^{n}(-W_{-2n+2}+ nW_{-2n+1}-(n-3)W_{-2n}- (n-2)W_{-2n-1})-2W_{3}+3W_{2}+2W_{1}-W_{0}.\)
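Proposition 5 admits a similar numerical check (again a verification aid, not part of the original text; the helper name is ours). Since \(u=1\), the backward extension \(W_{-n}=W_{-n+4}-W_{-n+3}-W_{-n+2}-W_{-n+1}\) stays integral, so plain integer arithmetic suffices:

```python
# Sanity check of Proposition 5 (x = -1 with r = s = t = u = 1).
# With u = 1 the backward extension is integral, so no fractions are needed.
def check_prop5(W0, W1, W2, W3, N):
    W = {0: W0, 1: W1, 2: W2, 3: W3}
    for m in range(1, 2 * N + 2):
        W[-m] = W[-m + 4] - W[-m + 3] - W[-m + 2] - W[-m + 1]
    for n in range(1, N + 1):
        sg = (-1) ** n
        # (a)
        if sum(k * (-1) ** k * W[-k] for k in range(1, n + 1)) != (
            sg * (-(n - 5) * W[-n + 3] + (2 * n - 9) * W[-n + 2]
                  - (n - 2) * W[-n + 1] + (2 * n - 6) * W[-n])
            - 5 * W[3] + 9 * W[2] - 2 * W[1] + 6 * W[0]
        ):
            return False
        # (b)
        if sum(k * (-1) ** k * W[-2 * k] for k in range(1, n + 1)) != (
            sg * (-(n - 2) * W[-2 * n + 2] + (n - 3) * W[-2 * n + 1]
                  + (2 * n - 2) * W[-2 * n] + W[-2 * n - 1])
            - W[3] - W[2] + 4 * W[1] + 3 * W[0]
        ):
            return False
        # (c)
        if sum(k * (-1) ** k * W[-2 * k + 1] for k in range(1, n + 1)) != (
            sg * (-W[-2 * n + 2] + n * W[-2 * n + 1]
                  - (n - 3) * W[-2 * n] - (n - 2) * W[-2 * n - 1])
            - 2 * W[3] + 3 * W[2] + 2 * W[1] - W[0]
        ):
            return False
    return True
```

The initial values \((0,1,1,2)\) and \((4,1,3,7)\) correspond to the Tetranacci and Tetranacci-Lucas cases treated below.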

From the above proposition, we have the following corollary which gives linear sum formulas of Tetranacci numbers (take \(W_{n}=M_{n}\) with \( M_{0}=0,M_{1}=1,M_{2}=1,M_{3}=2\)).

Corollary 28. For \(n\geq 1,\) Tetranacci numbers have the following properties.

  • (a) \(\sum\limits_{k=1}^{n}k(-1)^{k}M_{-k}= \left( -1\right) ^{n}(-(n-5)M_{-n+3}+(2n-9)M_{-n+2}-(n-2)M_{-n+1}+(2n-6)M_{-n})-3.\)
  • (b) \(\sum\limits_{k=1}^{n}k(-1)^{k}M_{-2k}=\left( -1\right) ^{n}(-(n-2)M_{-2n+2}+ (n-3)M_{-2n+1}+ (2n-2)M_{-2n}+M_{-2n-1})+1.\)
  • (c) \(\sum\limits_{k=1}^{n}k(-1)^{k}M_{-2k+1}=\left( -1\right) ^{n}(-M_{-2n+2}+ nM_{-2n+1}-(n-3)M_{-2n}- (n-2)M_{-2n-1})+1.\)

Taking \(W_{n}=R_{n}\) with \(R_{0}=4,R_{1}=1,R_{2}=3,R_{3}=7\) in the above proposition, we have the following corollary which presents linear sum formulas of Tetranacci-Lucas numbers.

Corollary 29. For \(n\geq 1,\) Tetranacci-Lucas numbers have the following properties.

  • (a) \(\sum\limits_{k=1}^{n}k(-1)^{k}R_{-k}= \left( -1\right) ^{n}(-(n-5)R_{-n+3}+(2n-9)R_{-n+2}-(n-2)R_{-n+1}+(2n-6)R_{-n})+14.\)
  • (b) \(\sum\limits_{k=1}^{n}k(-1)^{k}R_{-2k}=\left( -1\right) ^{n}(-(n-2)R_{-2n+2}+ (n-3)R_{-2n+1}+ (2n-2)R_{-2n}+R_{-2n-1})+6.\)
  • (c) \(\sum\limits_{k=1}^{n}k(-1)^{k}R_{-2k+1}=\left( -1\right) ^{n}(-R_{-2n+2}+ nR_{-2n+1}-(n-3)R_{-2n}- (n-2)R_{-2n-1})-7.\)

Taking \(x=-1,\) \(r=2,s=t=u=1\) in Theorem 6 (a) and (b) (or (c)), we obtain the following proposition.

Proposition 6. If \(r=2,s=t=u=1\) then for \(n\geq 1\) we have the following formulas:

  • (a) \(\sum\limits_{k=1}^{n}k(-1)^{k}W_{-k}=\frac{1}{4}(\left( -1\right) ^{n}(-(2n-7)W_{-n+3}+(6n-19) W_{-n+2}-(4n-6) W_{-n+1}+(6n-9)W_{-n})-7W_{3}+19W_{2}-6W_{1}+9W_{0}).\)
  • (b) \(\sum\limits_{k=1}^{n}k(-1)^{k}W_{-2k}=\frac{1}{4}(\left( -1\right) ^{n}(-(2n-3)W_{-2n+2}+(2n-3) W_{-2n+1}+(8n-10)W_{-2n}+ (2n-5)W_{-2n-1} )+5W_{3}-13W_{2}-2W_{1}+5W_{0}).\)
  • (c) \(\sum\limits_{k=1}^{n}k(-1)^{k}W_{-2k+1}=\frac{1}{4}(\left( -1\right) ^{n}(-(2n-3) W_{-2n+2}+(6n-7)W_{-2n+1}-2W_{-2n}-(2n-3)W_{-2n-1})-3W_{3}+3W_{2}+10W_{1}+5W_{0}). \)
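The same kind of check works for Proposition 6 (a sketch, not part of the original text; the helper name is ours). With \(r=2\) and \(s=t=u=1\) the backward extension \(W_{-n}=W_{-n+4}-2W_{-n+3}-W_{-n+2}-W_{-n+1}\) is integral, and clearing the common factor \(\frac{1}{4}\) keeps every comparison in integers:

```python
# Sanity check of Proposition 6 (x = -1 with r = 2, s = t = u = 1).
# Since u = 1, the backward extension is integral; both sides of each
# identity are compared after multiplying through by 4.
def check_prop6(W0, W1, W2, W3, N):
    W = {0: W0, 1: W1, 2: W2, 3: W3}
    for m in range(1, 2 * N + 2):
        W[-m] = W[-m + 4] - 2 * W[-m + 3] - W[-m + 2] - W[-m + 1]
    for n in range(1, N + 1):
        sg = (-1) ** n
        # (a)
        if 4 * sum(k * (-1) ** k * W[-k] for k in range(1, n + 1)) != (
            sg * (-(2 * n - 7) * W[-n + 3] + (6 * n - 19) * W[-n + 2]
                  - (4 * n - 6) * W[-n + 1] + (6 * n - 9) * W[-n])
            - 7 * W[3] + 19 * W[2] - 6 * W[1] + 9 * W[0]
        ):
            return False
        # (b)
        if 4 * sum(k * (-1) ** k * W[-2 * k] for k in range(1, n + 1)) != (
            sg * (-(2 * n - 3) * W[-2 * n + 2] + (2 * n - 3) * W[-2 * n + 1]
                  + (8 * n - 10) * W[-2 * n] + (2 * n - 5) * W[-2 * n - 1])
            + 5 * W[3] - 13 * W[2] - 2 * W[1] + 5 * W[0]
        ):
            return False
        # (c)
        if 4 * sum(k * (-1) ** k * W[-2 * k + 1] for k in range(1, n + 1)) != (
            sg * (-(2 * n - 3) * W[-2 * n + 2] + (6 * n - 7) * W[-2 * n + 1]
                  - 2 * W[-2 * n] - (2 * n - 3) * W[-2 * n - 1])
            - 3 * W[3] + 3 * W[2] + 10 * W[1] + 5 * W[0]
        ):
            return False
    return True
```

The initial values \((0,1,2,5)\) and \((4,2,6,17)\) correspond to the fourth-order Pell and Pell-Lucas cases treated below.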

From the last proposition, we have the following corollary which gives linear sum formulas of the fourth-order Pell numbers (take \(W_{n}=P_{n}\) with \(P_{0}=0,P_{1}=1,P_{2}=2,P_{3}=5\)).

Corollary 30. For \(n\geq 1,\) fourth-order Pell numbers have the following properties:

  • (a) \(\sum\limits_{k=1}^{n}k(-1)^{k}P_{-k}=\frac{1}{4}(\left( -1\right) ^{n}(-(2n-7)P_{-n+3}+(6n-19) P_{-n+2}-(4n-6) P_{-n+1}+(6n-9)P_{-n})-3).\)
  • (b) \(\sum\limits_{k=1}^{n}k(-1)^{k}P_{-2k}=\frac{1}{4}(\left( -1\right) ^{n}(-(2n-3)P_{-2n+2}+(2n-3) P_{-2n+1}+(8n-10)P_{-2n}+ (2n-5)P_{-2n-1} )-3).\)
  • (c) \(\sum\limits_{k=1}^{n}k(-1)^{k}P_{-2k+1}=\frac{1}{4}(\left( -1\right) ^{n}(-(2n-3) P_{-2n+2}+(6n-7)P_{-2n+1}-2P_{-2n}-(2n-3)P_{-2n-1})+1).\)

Taking \(W_{n}=Q_{n}\) with \(Q_{0}=4,Q_{1}=2,Q_{2}=6,Q_{3}=17\) in the last proposition, we have the following corollary which presents linear sum formulas of the fourth-order Pell-Lucas numbers.

Corollary 31. For \(n\geq 1,\) fourth-order Pell-Lucas numbers have the following properties:

  • (a) \(\sum\limits_{k=1}^{n}k(-1)^{k}Q_{-k}=\frac{1}{4}(\left( -1\right) ^{n}(-(2n-7)Q_{-n+3}+(6n-19) Q_{-n+2}-(4n-6) Q_{-n+1}+(6n-9)Q_{-n})+19).\)
  • (b) \(\sum\limits_{k=1}^{n}k(-1)^{k}Q_{-2k}=\frac{1}{4}(\left( -1\right) ^{n}(-(2n-3)Q_{-2n+2}+(2n-3) Q_{-2n+1}+(8n-10)Q_{-2n}+ (2n-5)Q_{-2n-1} )+23).\)
  • (c) \(\sum\limits_{k=1}^{n}k(-1)^{k}Q_{-2k+1}=\frac{1}{4}(\left( -1\right) ^{n}(-(2n-3) Q_{-2n+2}+(6n-7)Q_{-2n+1}-2Q_{-2n}-(2n-3)Q_{-2n-1})+7).\)

Observe that setting \(x=-1,r=1,s=1,t=1,u=2\) (i.e., for the generalized fourth order Jacobsthal case) in Theorem 6 (a), (b) and (c) makes the right-hand side of the sum formulas an indeterminate form. An application of L'Hospital's rule, however, provides the evaluation of the sum formulas.

Taking \(r=1,s=1,t=1,u=2\) in Theorem 6, we obtain the following theorem.

Theorem 8. If \(r=1,s=1,t=1,u=2\) then for \(n\geq 1\) we have the following formulas:

  • (a) \(\sum\limits_{k=1}^{n}k(-1)^{k}W_{-k}=\frac{1}{36}(-\left( -1\right) ^{n}(3n^{2}-5n-39)W_{-n+3}+2\left( -1\right) ^{n}(3n+10)(n-4)W_{-n+2}-\left( -1\right) ^{n}(3n^{2}+13n-39)W_{-n+1}+2\left( -1\right) ^{n}(3n^{2}+7n-31)W_{-n}-39W_{3}+80W_{2}-39W_{1}+62W_{0}).\)
  • (b) \(\sum\limits_{k=1}^{n}k(-1)^{k}W_{-2k}=\frac{1}{100}(\left( -1\right) ^{n}(-(15n^{2}+4n-69)W_{-2n+2}+2\left( 5n^{2}-2n-24\right) W_{-2n+1}+(35n^{2}+46n-140)W_{-2n}+2\left( 5n^{2}-2n-24\right) W_{-2n-1})+24W_{3}-93W_{2}+24W_{1}+116W_{0}).\)
  • (c) \(\sum\limits_{k=1}^{n}k(-1)^{k}W_{-2k+1}=\frac{1}{100}(\left( -1\right) ^{n}(-(n+3)(5n-7)W_{-2n+2}+(20n^{2}+42n-71)W_{-2n+1}-(n+3)(5n-7)W_{-2n}-2(15n^{2}+4n-69)W_{-2n-1})-69W_{3}+48W_{2}+140W_{1}+48W_{0}). \)

Proof.

  • (a) We use Theorem 6 (a). If we set \( r=1,s=1,t=1,u=2\) in Theorem 6 (a) then we have \begin{equation*} \sum\limits_{k=1}^{n}x^{k}W_{-k}=\frac{g_{8}(x)}{ (x-2)^{2}(x+1)^{2}(x^{2}+1)^{2}} \end{equation*} where

    \(g_{8}(x)=-x^{n+1} (n(-x^{4}+x^{3}+x^{2}+x+2)-x^{2}-2x^{3}+3x^{4}+2)W_{-n+3}-x^{n+1}(4x+2x^{2}+2x^{3}-4x^{4}+2x^{5}+n(x-1)(-x^{4}+x^{3}+x^{2}+x+2)-2)W_{-n+2}+ x^{n+1}\)

    \((4x-6x^{2}-4x^{3}+x^{4}+2x^{5}-x^{6}+n(-x^{2}+x+1)(-x^{4}+x^{3}+x^{2}+x+2)+2)W_{-n+1}+x^{n+1}(4x+6x^{2}-8x^{3}+n(-x^{3}+x^{2}+x+1)(-x^{4}+x^{3}+x^{2}+x+2)+2)W_{-n}\)

    \(- x(-3x^{4}+2x^{3}+x^{2}-2)W_{3}+ x(2x^{5}-4x^{4}+2x^{3}+2x^{2}+4x-2)W_{2}- x(-x^{6}+2x^{5}+x^{4}-4x^{3}-6x^{2}+4x+2)W_{1}-2x(-4x^{3}+3x^{2}+2x+1)W_{0}.\)

    For \(x=-1,\) the right-hand side of the above sum formula is an indeterminate form, so we can use L'Hospital's rule (twice). Then we get (a) using

    \( \sum\limits_{k=1}^{n}k(-1)^{k}W_{-k} =\left. \frac{\frac{d^{2}}{dx^{2}}\left( g_{8}(x)\right) }{\frac{d^{2}}{dx^{2}}\left( (x-2)^{2}(x+1)^{2}(x^{2}+1)^{2}\right) }\right\vert _{x=-1} =\frac{1}{36}(-\left( -1\right) ^{n}(3n^{2}-5n-39)W_{-n+3}\)

    \(+2\left( -1\right) ^{n}(3n+10)(n-4)W_{-n+2} -\left( -1\right) ^{n}(3n^{2}+13n-39)W_{-n+1}+2\left( -1\right) ^{n}(3n^{2}+7n-31)W_{-n} -39W_{3}+80W_{2}-39W_{1}+62W_{0}). \)

  • (b) We use Theorem 6 (b). If we set \( r=1,s=1,t=1,u=2\) in Theorem 6 (b) then we have \begin{equation*} \sum\limits_{k=1}^{n}x^{k}W_{-2k}=\frac{g_{9}(x)}{(x-1)^{2}(x-4)^{2}(x+1)^{4}} \end{equation*} where

    \(g_{9}(x)=- x^{n+1}(8x+x^{2}+6x^{3}+2x^{4}-2x^{5}+x^{6}+n(-x^{2}+x+2)(x^{4}-3x^{3}-5x^{2}+3x+4)+8)W_{-2n+2}+ \)

    \( x^{n+1}(16x+16x^{2}+12x^{3}-4x^{5}+n(2x+2)(x^{4}-3x^{3}-5x^{2}+3x+4)+8)W_{-2n+1}- \)

    \( x^{n+1}(16x+32x^{2}-10x^{3}-12x^{4}+6x^{5}+n(-x^{3}+3x^{2}+2x-2)(x^{4}-3x^{3}-5x^{2}+3x+4)-8)W_{-2n}+ 2x^{n+1}(8x+8x^{2}+6x^{3}-2x^{5}+n(x+1)(x^{4}-3x^{3}-5x^{2}+3x+4)+4)W_{-2n-1}\)

    \(-x(-2x^{5}+6x^{3}+8x^{2}+8x+4)W_{3}+x(x^{6}-4x^{5}+2x^{4}+12x^{3}+9x^{2}+16x+12)W_{2}-x (-2x^{5}+6x^{3}+8x^{2}+8x+4)W_{1}- 2x(-2x^{5}+6x^{4}+2x^{3}-20x^{2}-12x+2)W_{0}.\)

    For \(x=-1,\) the right-hand side of the above sum formula is an indeterminate form, so we can use L'Hospital's rule (four times). Then we get (b) using

    \( \sum\limits_{k=1}^{n}k(-1)^{k}W_{-2k} =\left. \frac{\frac{d^{4}}{dx^{4}}\left( g_{9}(x)\right) }{\frac{d^{4}}{dx^{4}}\left( (x-1)^{2}(x-4)^{2}(x+1)^{4}\right) }\right\vert _{x=-1} =\frac{1}{100}(\left( -1\right) ^{n}(-(15n^{2}+4n-69)W_{-2n+2}+2\left( 5n^{2}-2n-24\right) W_{-2n+1}\)

    \( +(35n^{2}+46n-140)W_{-2n}+2\left( 5n^{2}-2n-24\right) W_{-2n-1}) +24W_{3}-93W_{2}+24W_{1}+116W_{0}). \)

  • (c) We use Theorem 6 (c). If we set \( r=1,s=1,t=1,u=2\) in Theorem 6 (c) then we have \begin{equation*} \sum\limits_{k=1}^{n}x^{k}W_{-2k+1}=\frac{g_{10}(x)}{(x-1)^{2}(x-4)^{2}(x+1)^{4}} \end{equation*} where

    \(g_{10}(x)= x^{n+2}(15x+6x^{2}-2x^{3}-2x^{4}-x^{5}+n(x+1)(x^{4}-3x^{3}-5x^{2}+3x+4)+8) W_{-2n+2}- x^{n+2}(33x-4x^{2}-10x^{3}+4x^{4}+x^{5}+n(-x^{2}+2x+3)(x^{4}-3x^{3}-5x^{2}+3x+4)+24)W_{-2n+1}\)

    \(+x^{n+2}(15x+6x^{2}-2x^{3}-2x^{4}-x^{5}+n(x+1)(x^{4}-3x^{3}-5x^{2}+3x+4)+8)W_{-2n}- 2x^{n+1}(8x+x^{2}+6x^{3}+2x^{4}-2x^{5}+x^{6}+n(-x^{2}+x+2)(x^{4}-3x^{3}-5x^{2}+3x+4)+8)W_{-2n-1}\)

    \(+x (x^{6}-2x^{5}+2x^{4}+6x^{3}+x^{2}+8x+8)W_{3}-x(-4x^{5}+12x^{3}+16x^{2}+16x+8)W_{2}+ x(6x^{5}-12x^{4}-10x^{3}+32x^{2}+16x-8)W_{1}-2x (-2x^{5}+6x^{3}+8x^{2}+8x+4)W_{0}. \)

    For \(x=-1,\) the right-hand side of the above sum formula is an indeterminate form, so we can use L'Hospital's rule (four times). Then we get (c) using

    \(\sum\limits_{k=1}^{n}k{(-1)}^{k}W_{-2k+1}=\left. \frac{\frac{d^{4}}{dx^{4}}\left( g_{10}(x)\right) }{\frac{d^{4}}{dx^{4}}\left((x-1)^{2}(x-4)^{2}(x+1)^{4}\right) }\right\vert _{x=-1} =\frac{1}{100}(\left( -1\right)^{n}(-(n+3)(5n-7)W_{-2n+2}+(20n^{2}+42n-71)W_{-2n+1}\)

    \( -(n+3)(5n-7)W_{-2n}-2(15n^{2}+4n-69)W_{-2n-1}) -69W_{3}+48W_{2}+140W_{1}+48W_{0}). \)
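As in the \(x=1\) case, the formulas of Theorem 8 can be verified numerically (a sketch, not part of the original proof; the helper name is ours), with exact fractions handling the division by \(u=2\) in the backward recurrence and the stated factors \(\frac{1}{36}\) and \(\frac{1}{100}\) cleared before comparing:

```python
from fractions import Fraction as F

# Sanity check of Theorem 8 (x = -1 with r = s = t = 1, u = 2), using
# exact fractions for the backward recurrence
#   2*W_{-n} = W_{-n+4} - W_{-n+3} - W_{-n+2} - W_{-n+1}.
def check_theorem8(W0, W1, W2, W3, N):
    W = {0: F(W0), 1: F(W1), 2: F(W2), 3: F(W3)}
    for m in range(1, 2 * N + 2):
        W[-m] = (W[-m + 4] - W[-m + 3] - W[-m + 2] - W[-m + 1]) / 2
    for n in range(1, N + 1):
        sg = (-1) ** n
        # (a)
        if 36 * sum(k * (-1) ** k * W[-k] for k in range(1, n + 1)) != (
            -sg * (3 * n * n - 5 * n - 39) * W[-n + 3]
            + 2 * sg * (3 * n + 10) * (n - 4) * W[-n + 2]
            - sg * (3 * n * n + 13 * n - 39) * W[-n + 1]
            + 2 * sg * (3 * n * n + 7 * n - 31) * W[-n]
            - 39 * W[3] + 80 * W[2] - 39 * W[1] + 62 * W[0]
        ):
            return False
        # (b)
        if 100 * sum(k * (-1) ** k * W[-2 * k] for k in range(1, n + 1)) != (
            sg * (-(15 * n * n + 4 * n - 69) * W[-2 * n + 2]
                  + 2 * (5 * n * n - 2 * n - 24) * W[-2 * n + 1]
                  + (35 * n * n + 46 * n - 140) * W[-2 * n]
                  + 2 * (5 * n * n - 2 * n - 24) * W[-2 * n - 1])
            + 24 * W[3] - 93 * W[2] + 24 * W[1] + 116 * W[0]
        ):
            return False
        # (c)
        if 100 * sum(k * (-1) ** k * W[-2 * k + 1] for k in range(1, n + 1)) != (
            sg * (-(n + 3) * (5 * n - 7) * W[-2 * n + 2]
                  + (20 * n * n + 42 * n - 71) * W[-2 * n + 1]
                  - (n + 3) * (5 * n - 7) * W[-2 * n]
                  - 2 * (15 * n * n + 4 * n - 69) * W[-2 * n - 1])
            - 69 * W[3] + 48 * W[2] + 140 * W[1] + 48 * W[0]
        ):
            return False
    return True
```

The initial values \((0,1,1,1)\) and \((2,1,5,10)\) exercise the fourth-order Jacobsthal and Jacobsthal-Lucas cases treated below.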

Taking \(W_{n}=J_{n}\) with \(J_{0}=0,J_{1}=1,J_{2}=1,J_{3}=1\) in the last theorem, we have the following corollary, which presents the linear sum formulas of the fourth-order Jacobsthal numbers.

Corollary 32. For \(n\geq 1,\) fourth-order Jacobsthal numbers have the following properties:

  • (a) \(\sum\limits_{k=1}^{n}k(-1)^{k}J_{-k}=\frac{1}{36}(-\left( -1\right) ^{n}(3n^{2}-5n-39)J_{-n+3}+2\left( -1\right) ^{n}(3n+10)(n-4)J_{-n+2}-\left( -1\right) ^{n}(3n^{2}+13n-39)J_{-n+1}+2\left( -1\right) ^{n}(3n^{2}+7n-31)J_{-n}+2).\)
  • (b) \(\sum\limits_{k=1}^{n}k(-1)^{k}J_{-2k}=\frac{1}{100}(\left( -1\right) ^{n}(-(15n^{2}+4n-69)J_{-2n+2}+2\left( 5n^{2}-2n-24\right) J_{-2n+1}+(35n^{2}+46n-140)J_{-2n}+2\left( 5n^{2}-2n-24\right) J_{-2n-1})-45).\)
  • (c) \(\sum\limits_{k=1}^{n}k(-1)^{k}J_{-2k+1}=\frac{1}{100}(\left( -1\right) ^{n}(-(n+3)(5n-7)J_{-2n+2}+(20n^{2}+42n-71)J_{-2n+1}-(n+3)(5n-7)J_{-2n}-2(15n^{2}+4n-69)J_{-2n-1})+119). \)

From the last theorem, we have the following corollary, which gives the linear sum formulas of the fourth-order Jacobsthal-Lucas numbers (take \( W_{n}=j_{n}\) with \(j_{0}=2,j_{1}=1,j_{2}=5,j_{3}=10\)).

Corollary 33. For \(n\geq 1,\) fourth-order Jacobsthal-Lucas numbers have the following properties:

  • (a) \(\sum\limits_{k=1}^{n}k(-1)^{k}j_{-k}=\frac{1}{36}(-\left( -1\right) ^{n}(3n^{2}-5n-39)j_{-n+3}+2\left( -1\right) ^{n}(3n+10)(n-4)j_{-n+2}-\left( -1\right) ^{n}(3n^{2}+13n-39)j_{-n+1}+2\left( -1\right) ^{n}(3n^{2}+7n-31)j_{-n}+95).\)
  • (b) \(\sum\limits_{k=1}^{n}k(-1)^{k}j_{-2k}=\frac{1}{100}(\left( -1\right) ^{n}(-(15n^{2}+4n-69)j_{-2n+2}+2\left( 5n^{2}-2n-24\right) j_{-2n+1}+(35n^{2}+46n-140)j_{-2n}+2\left( 5n^{2}-2n-24\right) j_{-2n-1})+31).\)
  • (c) \(\sum\limits_{k=1}^{n}k(-1)^{k}j_{-2k+1}=\frac{1}{100}(\left( -1\right) ^{n}(-(n+3)(5n-7)j_{-2n+2}+(20n^{2}+42n-71)j_{-2n+1}-(n+3)(5n-7)j_{-2n}-2(15n^{2}+4n-69)j_{-2n-1})-214). \)

Taking \(W_{n}=K_{n}\) with \(K_{0}=3,K_{1}=1,K_{2}=3,K_{3}=10\) in the last theorem, we have the following corollary, which presents the linear sum formulas of the modified fourth-order Jacobsthal numbers.

Corollary 34. For \(n\geq 1,\) modified fourth-order Jacobsthal numbers have the following properties:

  • (a) \(\sum\limits_{k=1}^{n}k(-1)^{k}K_{-k}=\frac{1}{36}(-\left( -1\right) ^{n}(3n^{2}-5n-39)K_{-n+3}+2\left( -1\right) ^{n}(3n+10)(n-4)K_{-n+2}-\left( -1\right) ^{n}(3n^{2}+13n-39)K_{-n+1}+2\left( -1\right) ^{n}(3n^{2}+7n-31)K_{-n}-3).\)
  • (b) \(\sum\limits_{k=1}^{n}k(-1)^{k}K_{-2k}=\frac{1}{100}(\left( -1\right) ^{n}(-(15n^{2}+4n-69)K_{-2n+2}+2\left( 5n^{2}-2n-24\right) K_{-2n+1}+(35n^{2}+46n-140)K_{-2n}+2\left( 5n^{2}-2n-24\right) K_{-2n-1})+333).\)
  • (c) \(\sum\limits_{k=1}^{n}k(-1)^{k}K_{-2k+1}=\frac{1}{100}(\left( -1\right) ^{n}(-(n+3)(5n-7)K_{-2n+2}+(20n^{2}+42n-71)K_{-2n+1}-(n+3)(5n-7)K_{-2n}-2(15n^{2}+4n-69)K_{-2n-1})-262). \)

From the last theorem, we have the following corollary, which gives the linear sum formulas of the fourth-order Jacobsthal Perrin numbers (take \( W_{n}=Q_{n}\) with \(Q_{0}=3,Q_{1}=0,Q_{2}=2,Q_{3}=8\)).

Corollary 35. For \(n\geq 1,\) fourth-order Jacobsthal Perrin numbers have the following properties:

  • (a) \(\sum\limits_{k=1}^{n}k(-1)^{k}Q_{-k}=\frac{1}{36}(-\left( -1\right) ^{n}(3n^{2}-5n-39)Q_{-n+3}+2\left( -1\right) ^{n}(3n+10)(n-4)Q_{-n+2}-\left( -1\right) ^{n}(3n^{2}+13n-39)Q_{-n+1}+2\left( -1\right) ^{n}(3n^{2}+7n-31)Q_{-n}+34).\)
  • (b) \(\sum\limits_{k=1}^{n}k(-1)^{k}Q_{-2k}=\frac{1}{100}(\left( -1\right) ^{n}(-(15n^{2}+4n-69)Q_{-2n+2}+2\left( 5n^{2}-2n-24\right) Q_{-2n+1}+(35n^{2}+46n-140)Q_{-2n}+2\left( 5n^{2}-2n-24\right) Q_{-2n-1})+354).\)
  • (c) \(\sum\limits_{k=1}^{n}k(-1)^{k}Q_{-2k+1}=\frac{1}{100}(\left( -1\right) ^{n}(-(n+3)(5n-7)Q_{-2n+2}+(20n^{2}+42n-71)Q_{-2n+1}-(n+3)(5n-7)Q_{-2n}-2(15n^{2}+4n-69)Q_{-2n-1})-312). \)

Taking \(W_{n}=S_{n}\) with \(S_{0}=0,S_{1}=1,S_{2}=1,S_{3}=2\) in the last theorem, we have the following corollary, which presents the linear sum formulas of the adjusted fourth-order Jacobsthal numbers.

Corollary 36. For \(n\geq 1,\) adjusted fourth-order Jacobsthal numbers have the following properties:

  • (a) \(\sum\limits_{k=1}^{n}k(-1)^{k}S_{-k}=\frac{1}{36}(-\left( -1\right) ^{n}(3n^{2}-5n-39)S_{-n+3}+2\left( -1\right) ^{n}(3n+10)(n-4)S_{-n+2}-\left( -1\right) ^{n}(3n^{2}+13n-39)S_{-n+1}+2\left( -1\right) ^{n}(3n^{2}+7n-31)S_{-n}-37).\)
  • (b) \(\sum\limits_{k=1}^{n}k(-1)^{k}S_{-2k}=\frac{1}{100}(\left( -1\right) ^{n}(-(15n^{2}+4n-69)S_{-2n+2}+2\left( 5n^{2}-2n-24\right) S_{-2n+1}+(35n^{2}+46n-140)S_{-2n}+2\left( 5n^{2}-2n-24\right) S_{-2n-1})-21).\)
  • (c) \(\sum\limits_{k=1}^{n}k(-1)^{k}S_{-2k+1}=\frac{1}{100}(\left( -1\right) ^{n}(-(n+3)(5n-7)S_{-2n+2}+(20n^{2}+42n-71)S_{-2n+1}-(n+3)(5n-7)S_{-2n}-2(15n^{2}+4n-69)S_{-2n-1})+50). \)

From the last theorem, we have the following corollary, which gives the linear sum formulas of the modified fourth-order Jacobsthal-Lucas numbers (take \(W_{n}=R_{n}\) with \(R_{0}=4,R_{1}=1,R_{2}=3,R_{3}=7\)).

Corollary 37. For \(n\geq 1,\) modified fourth-order Jacobsthal-Lucas numbers have the following properties:

  • (a) \(\sum\limits_{k=1}^{n}k(-1)^{k}R_{-k}=\frac{1}{36}(-\left( -1\right) ^{n}(3n^{2}-5n-39)R_{-n+3}+2\left( -1\right) ^{n}(3n+10)(n-4)R_{-n+2}-\left( -1\right) ^{n}(3n^{2}+13n-39)R_{-n+1}+2\left( -1\right) ^{n}(3n^{2}+7n-31)R_{-n}+176).\)
  • (b) \(\sum\limits_{k=1}^{n}k(-1)^{k}R_{-2k}=\frac{1}{100}(\left( -1\right) ^{n}(-(15n^{2}+4n-69)R_{-2n+2}+2\left( 5n^{2}-2n-24\right) R_{-2n+1}+(35n^{2}+46n-140)R_{-2n}+2\left( 5n^{2}-2n-24\right) R_{-2n-1})+377).\)
  • (c) \(\sum\limits_{k=1}^{n}k(-1)^{k}R_{-2k+1}=\frac{1}{100}(\left( -1\right) ^{n}(-(n+3)(5n-7)R_{-2n+2}+(20n^{2}+42n-71)R_{-2n+1}-(n+3)(5n-7)R_{-2n}-2(15n^{2}+4n-69)R_{-2n-1})-7). \)

Taking \(x=-1,\) \(r=2,s=3,t=5,u=7\) in Theorem 6 (a), (b) and (c), we obtain the following proposition.

Proposition 7. If \(r=2,s=3,t=5,u=7\) then for \(n\geq 1\) we have the following formulas:

  • (a) \(\sum\limits_{k=1}^{n}k(-1)^{k}W_{-k}=\frac{1}{4}( \left( -1\right) ^{n}((2n+11)W_{-n+3}-(6n+35) W_{-n+2}+8W_{-n+1}-(10n+63)W_{-n})-11W_{3}+35W_{2}-8W_{1}+63W_{0}).\)
  • (b) \(\sum\limits_{k=1}^{n}k(-1)^{k}W_{-2k}=\frac{1}{36}(\left( -1\right) ^{n}((6n+7)W_{-2n+2}-(6n-5)W_{-2n+1}-36(n+2)W_{-2n}-7(6n+13)W_{-2n-1})+13W_{3}-33W_{2}-44W_{1}+7W_{0}). \)
  • (c) \(\sum\limits_{k=1}^{n}k(-1)^{k}W_{-2k+1}=\frac{1}{36}(\left( -1\right) ^{n}((6n+19)W_{-2n+2}-3(6n+17)W_{-2n+1}-4(3n+14)W_{-2n}+7(6n+7)W_{-2n-1})-7W_{3}-5W_{2}+72W_{1}+91W_{0}). \)
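Finally, Proposition 7 can be checked the same way (a sketch, not part of the original text; the helper name is ours). With \(u=7\) the backward values acquire powers of \(7\) in their denominators, so exact fractions are used, and the stated factors \(\frac{1}{4}\) and \(\frac{1}{36}\) are cleared before comparing:

```python
from fractions import Fraction as F

# Sanity check of Proposition 7 (x = -1 with r = 2, s = 3, t = 5, u = 7).
# Backward terms carry powers of 7 in the denominator, hence exact fractions.
def check_prop7(W0, W1, W2, W3, N):
    r, s, t, u = 2, 3, 5, 7
    W = {0: F(W0), 1: F(W1), 2: F(W2), 3: F(W3)}
    for m in range(1, 2 * N + 2):
        W[-m] = (W[-m + 4] - r * W[-m + 3] - s * W[-m + 2] - t * W[-m + 1]) / u
    for n in range(1, N + 1):
        sg = (-1) ** n
        # (a)
        if 4 * sum(k * (-1) ** k * W[-k] for k in range(1, n + 1)) != (
            sg * ((2 * n + 11) * W[-n + 3] - (6 * n + 35) * W[-n + 2]
                  + 8 * W[-n + 1] - (10 * n + 63) * W[-n])
            - 11 * W[3] + 35 * W[2] - 8 * W[1] + 63 * W[0]
        ):
            return False
        # (b)
        if 36 * sum(k * (-1) ** k * W[-2 * k] for k in range(1, n + 1)) != (
            sg * ((6 * n + 7) * W[-2 * n + 2] - (6 * n - 5) * W[-2 * n + 1]
                  - 36 * (n + 2) * W[-2 * n] - 7 * (6 * n + 13) * W[-2 * n - 1])
            + 13 * W[3] - 33 * W[2] - 44 * W[1] + 7 * W[0]
        ):
            return False
        # (c)
        if 36 * sum(k * (-1) ** k * W[-2 * k + 1] for k in range(1, n + 1)) != (
            sg * ((6 * n + 19) * W[-2 * n + 2] - 3 * (6 * n + 17) * W[-2 * n + 1]
                  - 4 * (3 * n + 14) * W[-2 * n] + 7 * (6 * n + 7) * W[-2 * n - 1])
            - 7 * W[3] - 5 * W[2] + 72 * W[1] + 91 * W[0]
        ):
            return False
    return True
```

The initial values \((0,0,1,2)\) and \((4,2,10,41)\) correspond to the 4-primes and Lucas 4-primes cases treated below.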

From the last proposition, we have the following corollary which gives linear sum formulas of 4-primes numbers (take \(W_{n}=G_{n}\) with \( G_{0}=0,G_{1}=0,G_{2}=1,G_{3}=2\)).

Corollary 38. For \(n\geq 1,\) 4-primes numbers have the following properties:

  • (a) \(\sum\limits_{k=1}^{n}k(-1)^{k}G_{-k}=\frac{1}{4}( \left( -1\right) ^{n}((2n+11)G_{-n+3}-(6n+35) G_{-n+2}+8G_{-n+1}-(10n+63)G_{-n})+13).\)
  • (b) \(\sum\limits_{k=1}^{n}k(-1)^{k}G_{-2k}=\frac{1}{36}(\left( -1\right) ^{n}((6n+7)G_{-2n+2}-(6n-5)G_{-2n+1}-36(n+2)G_{-2n}-7(6n+13)G_{-2n-1})-7).\)
  • (c) \(\sum\limits_{k=1}^{n}k(-1)^{k}G_{-2k+1}=\frac{1}{36}(\left( -1\right) ^{n}((6n+19)G_{-2n+2}-3(6n+17)G_{-2n+1}-4(3n+14)G_{-2n}+7(6n+7)G_{-2n-1})-19). \)

Taking \(W_{n}=H_{n}\) with \(H_{0}=4,H_{1}=2,H_{2}=10,H_{3}=41\) in the last proposition, we have the following corollary, which presents the linear sum formulas of the Lucas 4-primes numbers.

Corollary 39. For \(n\geq 1,\) Lucas 4-primes numbers have the following properties:

  • (a) \(\sum\limits_{k=1}^{n}k(-1)^{k}H_{-k}=\frac{1}{4}( \left( -1\right) ^{n}((2n+11)H_{-n+3}-(6n+35) H_{-n+2}+8H_{-n+1}-(10n+63)H_{-n})+135).\)
  • (b) \(\sum\limits_{k=1}^{n}k(-1)^{k}H_{-2k}=\frac{1}{36}(\left( -1\right) ^{n}((6n+7)H_{-2n+2}-(6n-5)H_{-2n+1}-36(n+2)H_{-2n}-7(6n+13)H_{-2n-1})+143).\)
  • (c) \(\sum\limits_{k=1}^{n}k(-1)^{k}H_{-2k+1}=\frac{1}{36}(\left( -1\right) ^{n}((6n+19)H_{-2n+2}-3(6n+17)H_{-2n+1}-4(3n+14)H_{-2n}+7(6n+7)H_{-2n-1})+171). \)

From the last proposition, we have the following corollary, which gives the linear sum formulas of the modified 4-primes numbers (take \(W_{n}=E_{n}\) with \( E_{0}=0,E_{1}=0,E_{2}=1,E_{3}=1\)).

Corollary 40. For \(n\geq 1,\) modified 4-primes numbers have the following properties:

  • (a) \(\sum\limits_{k=1}^{n}k(-1)^{k}E_{-k}=\frac{1}{4}( \left( -1\right) ^{n}((2n+11)E_{-n+3}-(6n+35) E_{-n+2}+8E_{-n+1}-(10n+63)E_{-n})+24).\)
  • (b) \(\sum\limits_{k=1}^{n}k(-1)^{k}E_{-2k}=\frac{1}{36}(\left( -1\right) ^{n}((6n+7)E_{-2n+2}-(6n-5)E_{-2n+1}-36(n+2)E_{-2n}-7(6n+13)E_{-2n-1})-20).\)
  • (c) \(\sum\limits_{k=1}^{n}k(-1)^{k}E_{-2k+1}=\frac{1}{36}(\left( -1\right) ^{n}((6n+19)E_{-2n+2}-3(6n+17)E_{-2n+1}-4(3n+14)E_{-2n}+7(6n+7)E_{-2n-1})-12). \)

5.3. The case \(x=i\)

In this subsection, we consider the special case \(x=i\). Taking \(r=s=t=u=1\) in Theorem 6, we obtain the following proposition.

Proposition 8. If \(r=s=t=u=1\) then for \(n\geq 1\) we have the following formulas:

  • (a) \(\sum\limits_{k=1}^{n}ki^{k}W_{-k}=i^{n}(i(n-(5+2i))W_{-n+3}+ (-1-i)(n-(\frac{9}{2}+\frac{5}{2} i))W_{-n+2}+(1-2i)(n-(4+2i))W_{-n+1}+2(n-(3+i))W_{-n})-(2-5i) W_{3}-(2+7i)W_{2}+(8-6i)W_{1}+(6+2i)W_{0}.\)
  • (b) \(\sum\limits_{k=1}^{n}ki^{k}W_{-2k}=\frac{1}{9+40i}(i^{n}((13-6i)(n-(\frac{8}{41}+\frac{10}{41}i))W_{-2n+2}+(-14-3i)(n-(\frac{81}{205}-\frac{32}{205}i))W_{-2n+1}+(-6+28i)(n+(\frac{83}{205}-\frac{91}{205}i))W_{-2n}+(-9+i)(n-(\frac{57}{82}-\frac{21}{82}i))W_{-2n-1})-(6-3i)W_{3}+(10-i)W_{2}-2iW_{1}-(4+17i)W_{0}).\)
  • (c) \(\sum\limits_{k=1}^{n}ki^{k}W_{-2k+1}=\frac{1}{9+40i}(i^{n}((-1-9i)(n+(\frac{25}{82}+\frac{21}{82}i))W_{-2n+2}+(7+22i)(n+(\frac{306}{533}-\frac{48}{533}i))W_{-2n+1}+(4-5i)(n+(\frac{33}{41}-\frac{10}{41}i))W_{-2n}+(13-6i)(n-(\frac{8}{41}+\frac{10}{41}i))W_{-2n-1})+(4+2i)W_{3}-(6-i)W_{2}-(10+14i)W_{1}-(6-3i)W_{0}).\)
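For \(r=s=t=u=1\) the recurrence reversed gives \(W_{-m}=W_{-m+4}-W_{-m+3}-W_{-m+2}-W_{-m+1}\), so formula (a) can be spot-checked in complex floating point for arbitrary initial values. The initial values below are arbitrary test data, not from the paper:

```python
def neg_extend(W0, W1, W2, W3, n):
    """Extend a fourth-order sequence with r=s=t=u=1 down to index -n via
    W_m = W_{m+4} - W_{m+3} - W_{m+2} - W_{m+1}."""
    W = {0: W0, 1: W1, 2: W2, 3: W3}
    for m in range(-1, -n - 1, -1):
        W[m] = W[m + 4] - W[m + 3] - W[m + 2] - W[m + 1]
    return W

def prop8a_residual(W0, W1, W2, W3, n):
    """|LHS - RHS| of Proposition 8(a); should be ~0 for every n >= 1."""
    W = neg_extend(W0, W1, W2, W3, n)
    lhs = sum(k * 1j ** k * W[-k] for k in range(1, n + 1))
    rhs = (1j ** n * (1j * (n - (5 + 2j)) * W[-n + 3]
                      + (-1 - 1j) * (n - (4.5 + 2.5j)) * W[-n + 2]
                      + (1 - 2j) * (n - (4 + 2j)) * W[-n + 1]
                      + 2 * (n - (3 + 1j)) * W[-n])
           - (2 - 5j) * W3 - (2 + 7j) * W2 + (8 - 6j) * W1 + (6 + 2j) * W0)
    return abs(lhs - rhs)

# Residual vanishes (up to rounding) for arbitrary initial values.
assert all(prop8a_residual(3, -1, 4, 2, n) < 1e-6 for n in range(1, 9))
```

The check is numeric rather than symbolic, but since every coefficient of the right-hand side is linear in \(n\), agreement across several \(n\) is strong evidence of the identity.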

From the above proposition, we obtain the following corollary, which gives linear sum formulas for the Tetranacci numbers (take \(W_{n}=M_{n}\) with \( M_{0}=0,M_{1}=1,M_{2}=1,M_{3}=2\)).

Corollary 41. For \(n\geq 1,\) Tetranacci numbers have the following properties.

  • (a) \(\sum\limits_{k=1}^{n}ki^{k}M_{-k}=i^{n}(i(n-(5+2i))M_{-n+3}+ (-1-i)(n-(\frac{9}{2}+\frac{5}{2} i))M_{-n+2}+(1-2i)(n-(4+2i))M_{-n+1}+2(n-(3+i))M_{-n})+(2-3i).\)
  • (b) \(\sum\limits_{k=1}^{n}ki^{k}M_{-2k}=\frac{1}{9+40i}(i^{n}((13-6i)(n-( \frac{8}{41}+\frac{10}{41}i))M_{-2n+2}+(-14-3i)(n-(\frac{81}{205}-\frac{32}{ 205}i))M_{-2n+1}+(-6+28i)(n+(\frac{83}{205}-\frac{91}{205}i)) M_{-2n}+(-9+i)(n-(\frac{57}{82}-\frac{21}{82}i))M_{-2n-1})+(-2+3i)).\)
  • (c) \(\sum\limits_{k=1}^{n}ki^{k}M_{-2k+1}=\frac{1}{9+40i}(i^{n}((-1-9i)(n+(\frac{25}{82}+\frac{21}{82}i))M_{-2n+2}+(7+22i)(n+(\frac{306}{533}-\frac{48}{533}i))M_{-2n+1}+(4-5i)(n+(\frac{33}{41}-\frac{10}{41}i))M_{-2n}+(13-6i)(n-(\frac{8}{41}+\frac{10}{41}i))M_{-2n-1})+(-8-9i)).\)
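A minimal numeric check of part (a), assuming the Tetranacci recurrence \(M_{n}=M_{n-1}+M_{n-2}+M_{n-3}+M_{n-4}\) reversed for negative indices; the helper names are ours:

```python
def tetranacci_back(n):
    """Tetranacci numbers M_0=0, M_1=1, M_2=1, M_3=2, extended down to
    index -n via M_m = M_{m+4} - M_{m+3} - M_{m+2} - M_{m+1}."""
    M = {0: 0, 1: 1, 2: 1, 3: 2}
    for m in range(-1, -n - 1, -1):
        M[m] = M[m + 4] - M[m + 3] - M[m + 2] - M[m + 1]
    return M

def corollary_41a_holds(n, tol=1e-6):
    """Compare both sides of Corollary 41(a) numerically."""
    M = tetranacci_back(n)
    lhs = sum(k * 1j ** k * M[-k] for k in range(1, n + 1))
    rhs = (1j ** n * (1j * (n - (5 + 2j)) * M[-n + 3]
                      + (-1 - 1j) * (n - (4.5 + 2.5j)) * M[-n + 2]
                      + (1 - 2j) * (n - (4 + 2j)) * M[-n + 1]
                      + 2 * (n - (3 + 1j)) * M[-n])
           + (2 - 3j))
    return abs(lhs - rhs) < tol

assert all(corollary_41a_holds(n) for n in range(1, 13))
```

At \(n=1\), for example, \(M_{-1}=0\), so both sides reduce to \(0\).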

Taking \(W_{n}=R_{n}\) with \(R_{0}=4,R_{1}=1,R_{2}=3,R_{3}=7\) in the above proposition, we obtain the following corollary, which presents linear sum formulas for the Tetranacci-Lucas numbers.

Corollary 42. For \(n\geq 1,\) Tetranacci-Lucas numbers have the following properties.

  • (a) \(\sum\limits_{k=1}^{n}ki^{k}R_{-k}=i^{n}(i(n-(5+2i))R_{-n+3}+ (-1-i)(n-(\frac{9}{2}+\frac{5}{2} i))R_{-n+2}+(1-2i)(n-(4+2i))R_{-n+1}+2(n-(3+i))R_{-n})+(12+16i).\)
  • (b) \(\sum\limits_{k=1}^{n}ki^{k}R_{-2k}=\frac{1}{9+40i}(i^{n}((13-6i)(n-( \frac{8}{41}+\frac{10}{41}i))R_{-2n+2}+(-14-3i)(n-(\frac{81}{205}-\frac{32}{ 205}i))R_{-2n+1}+(-6+28i)(n+(\frac{83}{205}-\frac{91}{205}i)) R_{-2n}+(-9+i)(n-(\frac{57}{82}-\frac{21}{82}i))R_{-2n-1})+(-28-52i)).\)
  • (c) \(\sum\limits_{k=1}^{n}ki^{k}R_{-2k+1}=\frac{1}{9+40i}(i^{n}((-1-9i)(n+(\frac{25}{82}+\frac{21}{82}i))R_{-2n+2}+(7+22i)(n+(\frac{306}{533}-\frac{48}{533}i))R_{-2n+1}+(4-5i)(n+(\frac{33}{41}-\frac{10}{41}i))R_{-2n}+(13-6i)(n-(\frac{8}{41}+\frac{10}{41}i))R_{-2n-1})+(-24+15i)).\)
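Part (b), with its \(\frac{1}{9+40i}\) prefactor, can be checked the same way, assuming the Tetranacci-Lucas recurrence \(R_{n}=R_{n-1}+R_{n-2}+R_{n-3}+R_{n-4}\) reversed for negative indices; function names are illustrative:

```python
def tetranacci_lucas_back(n):
    """Tetranacci-Lucas numbers R_0=4, R_1=1, R_2=3, R_3=7, extended down
    to index -n via R_m = R_{m+4} - R_{m+3} - R_{m+2} - R_{m+1}."""
    R = {0: 4, 1: 1, 2: 3, 3: 7}
    for m in range(-1, -n - 1, -1):
        R[m] = R[m + 4] - R[m + 3] - R[m + 2] - R[m + 1]
    return R

def corollary_42b_holds(n, tol=1e-6):
    """Compare both sides of Corollary 42(b) numerically."""
    R = tetranacci_lucas_back(2 * n + 1)
    lhs = sum(k * 1j ** k * R[-2 * k] for k in range(1, n + 1))
    bracket = ((13 - 6j) * (n - (8 / 41 + 10 / 41 * 1j)) * R[-2 * n + 2]
               + (-14 - 3j) * (n - (81 / 205 - 32 / 205 * 1j)) * R[-2 * n + 1]
               + (-6 + 28j) * (n + (83 / 205 - 91 / 205 * 1j)) * R[-2 * n]
               + (-9 + 1j) * (n - (57 / 82 - 21 / 82 * 1j)) * R[-2 * n - 1])
    rhs = (1j ** n * bracket + (-28 - 52j)) / (9 + 40j)
    return abs(lhs - rhs) < tol

assert all(corollary_42b_holds(n) for n in range(1, 9))
```

Hand evaluation at \(n=1\) gives \(R_{-2}=-1\), and both sides equal \(-i\).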

Corresponding sums for the other fourth-order generalized Tetranacci numbers can be calculated similarly.

Author Contributions:

All authors contributed equally to the writing of this paper. All authors read and approved the final manuscript.

Conflicts of Interest:

The authors declare no conflict of interest.
