ODAM – Vol 2 – Issue 3 (2019) – PISRT. Published: 30 November 2019. https://old.pisrt.org/psr-press/journals/odam-vol-2-issue-3-2019/on-graceful-difference-labelings-of-disjoint-unions-of-circuits/
ODAM-Vol. 2 (2019), Issue 3, pp. 38 – 55 Open Access Full-Text PDF

Open Journal of Discrete Applied Mathematics

On graceful difference labelings of disjoint unions of circuits

Alain Hertz\(^1\), Christophe Picouleau
Department of Mathematics and Industrial Engineering, Polytechnique Montréal and GERAD, Montréal, Canada; (A.H.)
CEDRIC, Conservatoire National des arts et métiers, Paris, France; (C.P.)
\(^{1}\)Corresponding Author: alain.hertz@gerad.ca

Abstract

A graceful difference labeling (gdl for short) of a directed graph \(G\) with vertex set \(V\) is a bijection \(f:V\rightarrow\{1,\ldots,\vert V\vert\}\) such that, when each arc \(uv\) is assigned the difference label \(f(v)-f(u)\), the resulting arc labels are distinct. We conjecture that all disjoint unions of circuits have a gdl, except in two particular cases. We prove partial results which support this conjecture.

Keywords:

Graceful labelings, directed graphs, disjoint unions of circuits.

1. Introduction

A graph labeling is the assignment of labels, traditionally represented by integers, to the vertices or edges, or both, of a graph, subject to certain conditions. As mentioned in the survey by Gallian [1], more than one thousand papers are devoted to this subject. Among all variations, the most popular and studied graph labelings are the \(\beta\)-valuations introduced by Rosa in 1966 [2], and later called graceful labelings by Golomb [3]. Formally, given a graph \(G\) with vertex set \(V\) and \(q\) edges, a graceful labeling of \(G\) is an injection \(f:V\rightarrow\{0,1,\ldots,q\}\) such that, when each edge \(uv\) is assigned the label \(\vert f(v)-f(u)\vert\), the resulting edge labels are distinct. In other words, the vertices are labeled using integers in \(\{0,1,\ldots,q\}\), and these vertex labels induce an edge labeling from \(1\) to \(q\). The famous Ringel-Kotzig conjecture, also known as the graceful labeling conjecture, hypothesizes that all trees are graceful. It is the focus of many papers and is still open, even for some very restricted graph classes such as trees with 5 leaves and trees with diameter 6. The survey by Gallian [1] lists several papers dealing with graceful labelings of particular classes of graphs, such as the disjoint union of cliques, the disjoint union of cycles, and the union of cycles with one common vertex.

For a directed graph \(G\) with vertex set \(V\) and \(q\) arcs, a graceful labeling of \(G\) is an injection \(f:V\rightarrow\{0,1,\ldots,q\}\) such that, when each arc (i.e., directed edge) \(uv\) is assigned the label \((f(v)-f(u)) \pmod{q+1}\), the resulting arc labels are distinct. As mentioned in [1] and [4], most results and conjectures on graceful labelings of directed graphs concern directed cycles, the disjoint union of directed cycles, and the union of directed cycles with one common vertex or one common arc. In particular, it is proved that \(n\overrightarrow{\bf C_3}\), the disjoint union of \(n\) copies of the directed cycle with three vertices, has a graceful labeling only if \(n\) is even. However, it is not known whether this necessary condition is also sufficient.

In this paper, we study graceful difference labelings of directed graphs, which are defined as follows. A graceful difference labeling (gdl for short) of a directed graph \(G=(V,A)\) is a bijection \(f:V\rightarrow\{1,\ldots,\vert V\vert\}\) such that, when each arc \(uv\) is assigned the difference label \(f(v)-f(u)\), the resulting arc labels are distinct. The absolute value \(|f(v)-f(u)|\) is called the magnitude of arc \(uv\), while \(f(v)\) is the vertex label of \(v\). Note that in a gdl of \(G\), two arcs \(uv\) and \(u'v'\) may have the same magnitude \(|f(v)-f(u)|=|f(v')-f(u')|\) but their difference labels must then be opposite, i.e., \(f(v)-f(u)=-(f(v')-f(u'))\).
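This definition is easy to check computationally. The following sketch (with `is_gdl` as our own helper name, and the digraph given by an arc list) verifies a candidate labeling against the definition:

```python
def is_gdl(labels, arcs):
    """Is `labels` (vertex -> label) a graceful difference labeling of the
    digraph with the given arcs, i.e., a bijection onto {1, ..., |V|} whose
    arc difference labels f(v) - f(u) are pairwise distinct?"""
    n = len(labels)
    if sorted(labels.values()) != list(range(1, n + 1)):
        return False
    diffs = [labels[v] - labels[u] for u, v in arcs]
    return len(set(diffs)) == len(diffs)

# Example: the circuit on 5 vertices labeled 1, 5, 2, 4, 3 in circuit order
# has difference labels 4, -3, 2, -1, -2, which are pairwise distinct.
f = dict(enumerate([1, 5, 2, 4, 3], start=1))
arcs = [(1, 2), (2, 3), (3, 4), (4, 5), (5, 1)]
```

Note that this example contains two arcs of magnitude 2 whose difference labels are opposite (2 and -2), which the definition allows.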

Given two graphs \(G_i=(V_i,A_i)\) and \(G_j=(V_j,A_j)\) with \(V_i\cap V_j=\emptyset,\) their disjoint union, denoted \(G_i+G_j\), is the graph with vertex set \(V_i\cup V_j\) and arc set \(A_i\cup A_j\). By \(pG\) we denote the disjoint union of \(p\) copies of \(G\). For \(k\ge 2\) we denote by \(\overrightarrow{\bf C_k}\) a circuit on \(k\) vertices isomorphic to the directed graph with vertex set \(V=\{v_1,\ldots, v_k\}\) and arc set \(A=\{v_iv_{i+1}:\ 1\le i< k\}\cup\{v_kv_1\}\). The circuit \(\overrightarrow{\bf C_3}\) is also called a directed triangle, or simply a triangle. For all graph theoretical terms not defined here the reader is referred to [5].

Not every directed graph has a gdl. Indeed, a necessary condition for \(G=(V,A)\) to have a gdl is \(\vert A\vert\le2(\vert V\vert-1)\), since the difference labels are pairwise distinct nonzero integers whose magnitudes are at most \(\vert V\vert-1\), and there are only \(2(\vert V\vert-1)\) such integers. Nevertheless, this condition is not sufficient since, for example, \(\overrightarrow{\bf C_3}\) has no gdl. Indeed, every bijection \(f:V\rightarrow\{1,2,3\}\) induces two difference labels equal to 1, or two equal to -1. As a second example, \(\overrightarrow{\bf C_2}+\overrightarrow{\bf C_3}\) has no gdl. Indeed:

  • If the two arcs of \(\overrightarrow{\bf C_2}\) have a magnitude equal to 1, 2, or 3, then \(\overrightarrow{\bf C_3}\) also has an arc with the same magnitude, which means that two arcs in \(\overrightarrow{\bf C_2}+\overrightarrow{\bf C_3}\) have the same difference label;
  • If the magnitude of the two arcs of \(\overrightarrow{\bf C_2}\) is equal to 4, then \(\overrightarrow{\bf C_3}\) is labeled with \(\{2,3,4\}\), so two of its difference labels are equal to 1, or two are equal to -1.
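Both impossibility claims are small enough to confirm by exhaustive search; the sketch below (with `has_gdl` as our own helper name) simply tries every bijection:

```python
from itertools import permutations

def has_gdl(arcs, n):
    """Brute force: does the digraph on vertices 1..n with the given arcs
    admit a graceful difference labeling?"""
    for perm in permutations(range(1, n + 1)):
        f = dict(zip(range(1, n + 1), perm))
        diffs = [f[v] - f[u] for u, v in arcs]
        if len(set(diffs)) == len(diffs):
            return True
    return False

# arc lists of the two exceptional graphs
c3 = [(1, 2), (2, 3), (3, 1)]
c2_plus_c3 = [(1, 2), (2, 1), (3, 4), (4, 5), (5, 3)]
```

The same search immediately finds a gdl for small circuits such as \(\overrightarrow{\bf C_4}\).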

We conjecture that all disjoint unions of circuits have a gdl, except for the two cases mentioned above. We were not able to prove this conjecture, but give partial results on it. In particular, we show that \(n\overrightarrow{\bf C_3}\) has a gdl if and only if \(n\geq 2\).

2. Partial proof of the conjecture

We are interested in determining which disjoint unions of circuits have a gdl. As already mentioned in the previous section, \(\overrightarrow{\bf C_3}\) and \(\overrightarrow{\bf C_2}+\overrightarrow{\bf C_3}\) have no gdl. We conjecture that these two graphs are the only two exceptions. As a first result, we show that if \(G\) is a circuit of length \(k=2\) or \(k\geq 4\), then \(G\) has a gdl. We next prove that if \(G\) has a gdl, and if \(G'\) is obtained by adding to \(G\) a circuit of even length \(k=2\) or \(k\geq6\), or two disjoint circuits of length 4, then \(G'\) also has a gdl. We also show that the disjoint union of \(\overrightarrow{\bf C_4}\) with a circuit of odd length has a gdl. Altogether, these results prove that if \(G\) is the disjoint union of circuits, among which at most one has an odd length, then \(G\) has a gdl, unless \(G=\overrightarrow{\bf C_3}\) or \(G=\overrightarrow{\bf C_2}+\overrightarrow{\bf C_3}\).

We next show that the disjoint union of \(n\geq 2\) circuits of length 3 has a gdl, and this is also the case if a \(\overrightarrow{\bf C_4}\) is added to \(n\overrightarrow{\bf C_3}\). Hence, if \(G\) is the union of disjoint circuits with no odd circuit of length \(k\geq 5\), then \(G\) has a gdl, unless \(G=\overrightarrow{\bf C_3}\) or \(G=\overrightarrow{\bf C_2}+\overrightarrow{\bf C_3}\). In order to prove the above stated conjecture, it will thus remain to show that if \(G\) is the disjoint union of circuits with at least two odd circuits, among which at least one has length \(k\geq 5\), then \(G\) has a gdl.

Our first lemma shows that all circuits have a gdl, except \(\overrightarrow{\bf C_3}\).

Lemma 1. The circuit \(\overrightarrow{\bf C_k}\) with \(k=2\) or \(k\geq 4\) has a gdl. Moreover, if \(k\geq 5\), then \(\overrightarrow{\bf C_k}\) has a gdl with exactly one arc of magnitude 1.

Proof. Clearly, \(\overrightarrow{\bf C_2}\) has a gdl since the two bijections \(f\ : \ V\rightarrow\{1,2\}\) have \(1\) and \(-1\) as difference labels. So assume \(k\geq 4\). We distinguish four cases, according to the value of \(k \bmod 4\):

  • If \(k=4p,p\ge 1\), we consider the following vertex labels:
    • \(f(v_{2i+1})=i+1, 0\le i\le 2p-2\);
    • \(f(v_{2i})=4p+1-i, 1\le i\le 2p-2\);
    • \(f(v_{4p-2})=2p+1\), \(f(v_{4p-1})=2p+2\), \(f(v_{4p})=2p\).
    Clearly, \(f\) is a bijection between \(\{v_1,\ldots,v_k\}\) and \(\{1,\ldots,k\}\) with the following difference labels:
    • \(f(v_{i+1})-f(v_i)=(-1)^{i+1}(4p-i),1\le i\le 4p-4\);
    • \(f(v_{4p-2})\!-\!f(v_{4p-3})\!=\!2\), \(f(v_{4p-1})\!-\!f(v_{4p-2})\!=\!1\), \(f(v_{4p})\!-\!f(v_{4p-1})\!=\!-2\), \(f(v_{1})\!-\!f(v_{4p})\!=\!-2p+1\).
    All magnitudes are distinct, except in three cases:
    • \(f(v_{4p-2})-f(v_{4p-3})=2\) and \(f(v_{4p})-f(v_{4p-1})=-2\);
    • for \(p\geq 3\), \(f(v_{2p+2})-f(v_{2p+1})=2p-1\) and \(f(v_{1})-f(v_{4p})=-(2p-1)\);
    • for \(p=1\), \(f(v_{4p-1})-f(v_{4p-2})=1\) and \(f(v_{1})-f(v_{4p})=-1\).
    Hence, \(f\) is a gdl, and there is exactly one arc of magnitude 1 when \(p\geq 2\).
  • If \(k=4p+1,p\ge 1\), we consider the following vertex labels:
    • \(f(v_{2i+1})=i+1, 0\le i\le 2p\);
    • \(f(v_{2i})=4p+2-i, 1\le i\le 2p\).
    Again, \(f\) is a bijection between \(\{v_1,\ldots,v_k\}\) and \(\{1,\ldots,k\}\) with the following difference labels:
    • \(f(v_{i+1})-f(v_i)=(-1)^{i+1}(4p+1-i),1\le i\le 4p\);
    • \(f(v_{1})-f(v_{4p+1})=-2p\).
    All magnitudes are distinct, except for one pair of arcs: \(f(v_{2p+2})-f(v_{2p+1})=2p\) and \(f(v_{1})-f(v_{4p+1})=-2p\). Hence, \(f\) is a gdl with exactly one arc of magnitude 1.
  • If \(k=4p+2,p\ge 0\), we consider the following vertex labels:
    • \(f(v_{2i+1})=i+1, 0\le i\le 2p\);
    • \(f(v_{2i})=4p+3-i, 1\le i\le 2p+1\).
    Here also, \(f\) is a bijection between \(\{v_1,\ldots,v_k\}\) and \(\{1,\ldots,k\}\) with the following difference labels:
    • \(f(v_{i+1})-f(v_i)=(-1)^{i+1}(4p+2-i),1\le i\le 4p+1\);
    • \(f(v_{1})-f(v_{4p+2})=-2p-1\).
    There are only two equal magnitudes: \(f(v_{2p+2})-f(v_{2p+1})=2p+1\) and \(f(v_{1})-f(v_{4p+2})=-(2p+1)\). Hence, \(f\) is a gdl with exactly one arc of magnitude 1 when \(p\geq 1\).
  • If \(k=4p+3,p\ge 1\), we consider the following vertex labels:
    • \(f(v_{2i+1})=i+1, 0\le i\le 2p-1\);
    • \(f(v_{2i})=4p+4-i, 1\le i\le 2p\);
    • \(f(v_{4p+1})=2p+2\), \(f(v_{4p+2})=2p+1\), \(f(v_{4p+3})=2p+3\).
    For this last case, \(f\) is a bijection between \(\{v_1,\ldots,v_k\}\) and \(\{1,\ldots,k\}\) with the following difference labels:
    • \(f(v_{i+1})-f(v_i)=(-1)^{i+1}(4p+3-i),1\le i\le 4p-1\);
    • \(f(v_{4p+1})\!-\!f(v_{4p})\!=\!-2\), \(f(v_{4p+2})\!-\!f(v_{4p+1})\!=\!-1\), \(f(v_{4p+3})\!-\!f(v_{4p+2})\!=\!2\), \(f(v_{1})\!-\!f(v_{4p+3})\!=\!-(2p+2)\).
    All magnitudes are distinct, except in two cases:
    • \(f(v_{4p-2})-f(v_{4p-3})=2\) and \(f(v_{4p})-f(v_{4p-1})=-2\);
    • \(f(v_{2p+2})-f(v_{2p+1})=2p+2\) and \(f(v_{1})-f(v_{4p+3})=-(2p+2)\).
    Hence, \(f\) is a gdl with exactly one arc of magnitude 1.
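The four labelings of the proof can be written down explicitly; the sketch below (function name is ours) builds \(f\) by the case analysis on \(k \bmod 4\) above:

```python
def lemma1_labeling(k):
    """Vertex labels f(v_1), ..., f(v_k) of the Lemma 1 gdl of the circuit
    C_k (k = 2 or k >= 4), following the case analysis on k mod 4."""
    assert k == 2 or k >= 4
    f = [0] * (k + 1)                  # 1-indexed: f[i] is the label of v_i
    p, r = divmod(k, 4)
    if r == 0:                         # k = 4p
        for i in range(0, 2 * p - 1):
            f[2 * i + 1] = i + 1
        for i in range(1, 2 * p - 1):
            f[2 * i] = 4 * p + 1 - i
        f[4 * p - 2], f[4 * p - 1], f[4 * p] = 2 * p + 1, 2 * p + 2, 2 * p
    elif r == 1:                       # k = 4p + 1
        for i in range(0, 2 * p + 1):
            f[2 * i + 1] = i + 1
        for i in range(1, 2 * p + 1):
            f[2 * i] = 4 * p + 2 - i
    elif r == 2:                       # k = 4p + 2 (p = 0 covers C_2)
        for i in range(0, 2 * p + 1):
            f[2 * i + 1] = i + 1
        for i in range(1, 2 * p + 2):
            f[2 * i] = 4 * p + 3 - i
    else:                              # k = 4p + 3
        for i in range(0, 2 * p):
            f[2 * i + 1] = i + 1
        for i in range(1, 2 * p + 1):
            f[2 * i] = 4 * p + 4 - i
        f[4 * p + 1], f[4 * p + 2], f[4 * p + 3] = 2 * p + 2, 2 * p + 1, 2 * p + 3
    return f[1:]
```

For instance, \(k=7\) yields the labels \(1,7,2,6,4,3,5\) in circuit order, with difference labels \(6,-5,4,-2,-1,2,-4\).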

We now show how to add two circuits of length 4, or one even circuit of length \(k\geq 6\) to a graph that has a gdl.

Lemma 2. If a graph \(G\) has a gdl, then \(G+2\overrightarrow{\bf C_{4}}\) also has a gdl.

Proof. Let \(\{v_1,v_2,v_3,v_4\}\) be the vertex set of the first \(\overrightarrow {\bf C_{4}}\), and let \(\{v_1v_2,v_2v_3,v_3v_4,v_4v_1\}\) be its arc set. Also, let \(\{v_5,v_6,v_7,v_8\}\) be the vertex set of the second \(\overrightarrow{\bf C_{4}}\), and let \(\{v_5v_6,v_6v_7,v_7v_8,v_8v_5\}\) be its arc set. Suppose \(G=(V,A)\) has a gdl \(f\). Define \(f'(v)=f(v)+4\) for all \(v\in V\) as well as \(f'(v_1)=1, f'(v_2)=\vert V\vert+8, f'(v_3)=2, f'(v_4)=\vert V\vert+6, f'(v_5)=3, f'(v_6)=\vert V\vert+5, f'(v_7)=4,\) and \(f'(v_8)=\vert V\vert+7\). Clearly, \(f'\) is a bijection between \(V\cup\{v_1,\ldots,v_8\}\) and \(\{1,\ldots,\vert V\vert +8\}\). Moreover, the difference labels on the arcs of the two circuits are \(f'(v_2)-f'(v_1)=\vert V\vert+7, f'(v_3)-f'(v_2)=-(\vert V\vert+6), f'(v_4)-f'(v_3)=\vert V\vert+4, f'(v_1)-f'(v_4)=-(\vert V\vert+5), f'(v_6)-f'(v_5)=\vert V\vert+2, f'(v_7)-f'(v_6)=-(\vert V\vert+1), f'(v_8)-f'(v_7)=\vert V\vert+3,\) and \(f'(v_5)-f'(v_8)=-(\vert V\vert+4)\). Since all magnitudes in \(G\) are at most equal to \(\vert V\vert-1\), \(f'\) is a gdl for \(G+2\overrightarrow{\bf C_{4}}\).

Note that in the proof of Lemma 2, \(G\) can be the empty graph with no vertices and no arcs. Hence \(2\overrightarrow{\bf C_{4}}\) has a gdl.
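The construction of Lemma 2 can be sketched as follows (function name is ours; the gdl of \(G\) is passed as its list of vertex labels):

```python
def add_two_c4(g_labels):
    """Lemma 2: shift a gdl of G (labels 1..n) up by 4 and label the eight
    vertices v1, ..., v8 of the two new circuits of length 4."""
    n = len(g_labels)
    shifted = [x + 4 for x in g_labels]                 # G now uses {5, ..., n + 4}
    new = [1, n + 8, 2, n + 6, 3, n + 5, 4, n + 7]      # f'(v1), ..., f'(v8)
    return shifted, new
```

With \(G\) empty (\(n=0\)), the eight difference labels are \(7,-6,4,-5\) on the first circuit and \(2,-1,3,-4\) on the second, so \(2\overrightarrow{\bf C_4}\) indeed has a gdl.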

Lemma 3. If a graph \(G\) has a gdl, then \(G+\overrightarrow{\bf C_{2k}}\) also has a gdl for \(k\ge 1,k\ne 2\).

Proof. Suppose \(G=(V,A)\) has a gdl \(f\), and let \(\{v_1,\ldots,v_{2k}\}\) be the vertex set and \(\{v_1v_2,\ldots,v_{2k-1}v_{2k},v_{2k}v_1\}\) be the arc set of \(\overrightarrow{\bf C_{2k}}\). We consider two cases.

  • If \(k\) is odd, then define \(f'(v)=f(v)+k\) for all \(v\in V\), as well as \(f'(v_{2i-1})=k-i+1\) and \(f'(v_{2i})=\vert V\vert+k+i\) for \(1\le i\le k\). Clearly, \(f'\) is a bijection between \(V\cup\{v_1,\ldots,v_{2k}\}\) and \(\{1,\ldots,\vert V\vert +2k\}\). Moreover, the magnitudes on \(\overrightarrow{\bf C_{2k}}\) are all strictly larger than \(\vert V\vert\) and all different, except in one case: \(f'(v_{k+1})-f'(v_k)=\vert V\vert+k\) and \(f'(v_{1})-f'(v_{2k})=-(\vert V\vert+k)\). Since all magnitudes in \(G\) are strictly smaller than \(\vert V\vert\), \(f'\) is a gdl for \(G+\overrightarrow{\bf C_{2k}}\).
  • If \(k\) is even and at least equal to \(4\), then set \(f'(v)=f(v)+k\) for all \(v\in V\), and define the vertex labels on \(\overrightarrow{\bf C_{2k}}\) as follows:
    • \(f'(v_{2i-1})=k-i+1\) for \(1\le i\le k\);
    • \(f'(v_{2i})=\vert V\vert+k+i\) for \(1\le i\le k-3\);
    • \(f'(v_{2k-4})=\vert V\vert+2k, f'(v_{2k-2})=\vert V\vert+2k-2, f'(v_{2k})=\vert V\vert+2k-1\).
    \(f'\) is a bijection between \(V\cup\{v_1,\ldots,v_{2k}\}\) and \(\{1,\ldots,\vert V\vert +2k\}\), and all magnitudes on \(\overrightarrow{\bf C_{2k}}\) are strictly larger than \(\vert V\vert\). Moreover, whenever two magnitudes on \(\overrightarrow{\bf C_{2k}}\) are equal, the two difference labels are opposite:
    • \(f'(v_{k})-f'(v_{k-1})=\vert V\vert+k-1\) and \(f'(v_{1})-f'(v_{2k})=-(\vert V\vert+k-1)\) (this first case occurs only for \(k\geq 6\));
    • \(f'(v_{2k-4})-f'(v_{2k-5})=\vert V\vert+2k-3\) and \(f'(v_{2k-1})-f'(v_{2k-2})=-(\vert V\vert+2k-3)\);
    • \(f'(v_{2k})-f'(v_{2k-1})=\vert V\vert+2k-2\) and \(f'(v_{2k-3})-f'(v_{2k-4})=-(\vert V\vert+2k-2)\).
    Since all magnitudes in \(G\) are strictly smaller than \(\vert V\vert\), \(f'\) is a gdl for \(G+\overrightarrow{\bf C_{2k}}\).
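A computational sketch of the Lemma 3 construction (function name is ours; only the labels of the new circuit are produced, the labels of \(G\) being shifted up by \(k\)):

```python
def c2k_labels(n, k):
    """Labels of the new circuit C_{2k} in Lemma 3, added on top of a graph G
    with n vertices whose gdl is shifted up by k (so G uses {k+1, ..., n+k})."""
    assert k >= 1 and k != 2
    f = [0] * (2 * k + 1)                      # 1-indexed
    for i in range(1, k + 1):
        f[2 * i - 1] = k - i + 1               # odd positions: k, k-1, ..., 1
    if k % 2 == 1:                             # k odd
        for i in range(1, k + 1):
            f[2 * i] = n + k + i
    else:                                      # k even, k >= 4
        for i in range(1, k - 2):
            f[2 * i] = n + k + i
        f[2 * k - 4], f[2 * k - 2], f[2 * k] = n + 2 * k, n + 2 * k - 2, n + 2 * k - 1
    return f[1:]
```

All magnitudes on the new circuit are at least \(\vert V\vert+1\), so they cannot collide with the magnitudes of the shifted gdl of \(G\), which are at most \(\vert V\vert-1\).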

Since graph \(G\) in the statement of Lemma 2 is possibly empty, it follows from Lemmas 1, 2 and 3 that all disjoint unions of circuits of even length have a gdl. We now consider disjoint unions of circuits among which exactly one has an odd length. As already observed, \(\overrightarrow{\bf C_3}\) and \(\overrightarrow{\bf C_2}+\overrightarrow{\bf C_3}\) have no gdl. We show that these are the only two exceptions. According to Lemmas 2 and 3, it is sufficient to prove that \(2\overrightarrow{\bf C_2}+\overrightarrow{\bf C_3}\), \(\overrightarrow{\bf C_4}+\overrightarrow{\bf C_{2k+1}}\) (\(k\geq 1\)), and \(\overrightarrow{\bf C_{2k}}+\overrightarrow{\bf C_3}\) (\(k\geq 3\)) have a gdl.

Lemma 4. \(2\overrightarrow{\bf C_2}+\overrightarrow{\bf C_3}\) has a gdl.

Proof. Let \(\{v_1,\ldots,v_7\}\) be the vertex set and \(\{v_1v_2\), \(v_2v_1\), \(v_3v_4\), \(v_4v_3\), \(v_5v_6\), \(v_6v_7\), \(v_7v_5\}\) be the arc set of \(2\overrightarrow{\bf C_2}+\overrightarrow{\bf C_3}\). By considering the vertex labels \(f(v_1)=1\), \(f(v_2)=6\), \(f(v_3)=3\), \(f(v_4)=7\), \(f(v_5)=2\), \(f(v_6)=4\) and \(f(v_7)=5\), it is easy to observe that \(f\) is a gdl.
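The labeling of Lemma 4 is small enough to check directly; in code:

```python
# Direct verification of the Lemma 4 labeling of 2 C_2 + C_3.
f = {1: 1, 2: 6, 3: 3, 4: 7, 5: 2, 6: 4, 7: 5}
arcs = [(1, 2), (2, 1), (3, 4), (4, 3), (5, 6), (6, 7), (7, 5)]
diffs = [f[v] - f[u] for u, v in arcs]   # difference labels: 5, -5, 4, -4, 2, 1, -3
assert sorted(f.values()) == list(range(1, 8))
assert len(set(diffs)) == len(arcs)      # all difference labels distinct: f is a gdl
```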

Lemma 5. \(\overrightarrow{\bf C_4}+\overrightarrow{\bf C_{2k+1}}\) has a gdl for every \(k\ge 1\).

Proof. Let \(G=\overrightarrow{\bf C_4}+\overrightarrow{\bf C_{2k+1}}\). We distinguish two cases:

  • If \(k\) is odd, then \(G\) contains \(n=4(\frac{k+1}{2})+3\) vertices. Consider the vertex labels of \(\overrightarrow{\bf C_{n}}\) used in the last case of the proof of Lemma 1, with \(p=\frac{k+1}{2}\), and assume that \(\{v_1,v_{n-2},v_{n-1},v_n\}\) is the vertex set of the \(\overrightarrow{\bf C_4}\) in \(G\), while \(\{v_2,v_{3},\ldots,v_{n-3}\}\) is the vertex set of the \(\overrightarrow{\bf C_{2k+1}}\). It is sufficient to prove that the difference labels on \(v_1v_{n-2}\) and \(v_{n-3}v_2\) do not appear on any other arc of \(G\).
    • \(f(v_{n-2})-f(v_1)=(2p+2)-1=(k+3)-1=k+2\), which is an odd positive number, while all other odd difference labels are negative.
    • \(f(v_2)-f(v_{n-3})=(4p+3)-(2p+4)=2p-1=k\), which is again an odd positive number, different from the other odd difference labels, which are negative.
  • If \(k\) is even, consider the vertex labels of \(\overrightarrow{\bf C_{2k+4}}\) used in the first case of the proof of Lemma 1 with \(p=\frac{k}{2}+1\geq 2\) (i.e., \(4p=2k+4\)). Also, define \(f(v_{2k+5})=2k+5=4p+1\). Assume that \(\{v_1,v_{2k+2},v_{2k+3},v_{2k+4}\}\) is the vertex set of the \(\overrightarrow{\bf C_4}\) in \(G\), while \(\{v_2,v_{3},\ldots,v_{2k+1},v_{2k+5}\}\) is the vertex set of the \(\overrightarrow{\bf C_{2k+1}}\). It is sufficient to prove that the difference labels on \(v_1v_{2k+2}\), \(v_{2k+5}v_2\), and \(v_{2k+1}v_{2k+5}\) do not appear on any other arc of \(G\).
    • \(f(v_{2k+2})-f(v_1)=(2p+1)-1=(k+3)-1=k+2\), which is an even positive number, while all other even difference labels are negative.
    • \(f(v_2)-f(v_{2k+5})=(4p)-(4p+1)=-1\). Since \(p>1\), the only other arc with magnitude 1 is \(v_{2k+2}v_{2k+3}\) which has a difference label of 1.
    • \(f(v_{2k+5})-f(v_{2k+1})=(4p+1)-(2p-1)=2p+2=k+4\), which is again an even positive number, while all other even difference labels are negative.

Lemma 6. \(\overrightarrow{\bf C_k}+\overrightarrow{\bf C_3}\) has a gdl for every \(k\ge 5\).

Proof. Let \(\{v_1,\ldots,v_{k+3}\}\) be the vertex set and \(\{v_1v_2, \ldots, v_{k-1}v_{k}\), \(v_kv_1\), \(v_{k+1}v_{k+2}\), \(v_{k+2}v_{k+3}\), \(v_{k+3}v_{k+1} \}\) be the arc set of \(G=\overrightarrow{\bf C_k}+\overrightarrow{\bf C_3}\). Consider the gdl \(f\) defined in the proof of Lemma 1 for \(\overrightarrow{\bf C_k}\), and set \(f'(v_i)=f(v_i)+2\) for all \(i=1,\ldots,k\). If the only arc of magnitude 1 has a difference label equal to -1, then define \(f'(v_{k+1})=1\), \(f'(v_{k+2})=2\), and \(f'(v_{k+3})=k+3\), else define \(f'(v_{k+1})=2\), \(f'(v_{k+2})=1\), and \(f'(v_{k+3})=k+3\). Clearly, \(f'\) is a bijection between \(\{v_1,\ldots,v_{k+3}\}\) and \(\{1,\ldots,k+3\}\). To conclude that \(f'\) is a gdl, it is sufficient to prove that the difference labels on \(\overrightarrow{\bf C_3}\) do not appear on \(\overrightarrow{\bf C_k}\).

  • The arc \(v_{k+1}v_{k+2}\) has magnitude 1, and its difference label has the sign opposite to that of the magnitude-1 arc of \(\overrightarrow{\bf C_k}\);
  • The magnitudes of \(v_{k+2}v_{k+3}\) and \(v_{k+3}v_{k+1}\) are distinct and larger than \(k\), while all magnitudes in \(\overrightarrow{\bf C_k}\) are strictly smaller than \(k\).
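Combining this with Lemma 1 gives an explicit construction; the sketch below repeats the Lemma 1 labeling so as to be self-contained (both function names are ours):

```python
def lemma1_labeling(k):
    """Vertex labels of the Lemma 1 gdl of C_k (k = 2 or k >= 4), in circuit order."""
    assert k == 2 or k >= 4
    f = [0] * (k + 1)                  # 1-indexed: f[i] is the label of v_i
    p, r = divmod(k, 4)
    if r == 0:
        for i in range(0, 2 * p - 1):
            f[2 * i + 1] = i + 1
        for i in range(1, 2 * p - 1):
            f[2 * i] = 4 * p + 1 - i
        f[4 * p - 2], f[4 * p - 1], f[4 * p] = 2 * p + 1, 2 * p + 2, 2 * p
    elif r == 1:
        for i in range(0, 2 * p + 1):
            f[2 * i + 1] = i + 1
        for i in range(1, 2 * p + 1):
            f[2 * i] = 4 * p + 2 - i
    elif r == 2:
        for i in range(0, 2 * p + 1):
            f[2 * i + 1] = i + 1
        for i in range(1, 2 * p + 2):
            f[2 * i] = 4 * p + 3 - i
    else:
        for i in range(0, 2 * p):
            f[2 * i + 1] = i + 1
        for i in range(1, 2 * p + 1):
            f[2 * i] = 4 * p + 4 - i
        f[4 * p + 1], f[4 * p + 2], f[4 * p + 3] = 2 * p + 2, 2 * p + 1, 2 * p + 3
    return f[1:]

def lemma6_labels(k):
    """Lemma 6: labels for C_k + C_3 (k >= 5); the first k values label the
    circuit C_k, the last three label the triangle v_{k+1}, v_{k+2}, v_{k+3}."""
    base = lemma1_labeling(k)
    diffs = [base[(i + 1) % k] - base[i] for i in range(k)]
    sign = next(d for d in diffs if abs(d) == 1)   # the unique magnitude-1 label
    shifted = [x + 2 for x in base]                # C_k now uses {3, ..., k + 2}
    # the triangle gets labels 1, 2, k + 3, oriented against the magnitude-1 arc
    extra = [1, 2, k + 3] if sign == -1 else [2, 1, k + 3]
    return shifted + extra
```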

Altogether, the previous lemmas show that if \(G\) is the disjoint union of circuits, among which at most one has an odd length, then \(G\) has a gdl if and only if \(G \neq \overrightarrow{\bf C_3}\) and \(G\neq \overrightarrow{\bf C_2}+\overrightarrow{\bf C_3}\).

We now consider the disjoint union of \(n\) circuits of length 3, and show that these graphs have a gdl for all \(n\geq 2\).

Lemma 7. For every \(n\geq 2\), the graph \(n\overrightarrow{\bf C_3}\) has a gdl with at most one arc of magnitude \(3n-2\), and all other arcs of magnitude strictly smaller than \(3n-2\).

Proof. The graphs in Figures 1, 2, 3, 4, 5, 6, 7 and 8 show the existence of the desired gdl for \(2\leq n \leq 9\).

Figure 1. \(2\overrightarrow{\bf C_3}\).

Figure 2. \(3\overrightarrow{\bf C_3}\).

Figure 3. \(4\overrightarrow{\bf C_3}\).

Figure 4. \(5\overrightarrow{\bf C_3}\).

Figure 5. \(6\overrightarrow{\bf C_3}\).

Figure 6. \(7\overrightarrow{\bf C_3}\).

Figure 7. \(8\overrightarrow{\bf C_3}\).

Figure 8. \(9\overrightarrow{\bf C_3}\).
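The figures themselves are not reproduced in this text-only version; for the smallest case \(n=2\), a brute-force search recovers a labeling with the required magnitude bound (function name is ours):

```python
from itertools import permutations

def find_gdl_2c3():
    """Search for a gdl of 2 C_3 whose magnitudes are all at most 3n - 2 = 4,
    with at most one arc of magnitude 4, as Lemma 7 requires for n = 2."""
    arcs = [(0, 1), (1, 2), (2, 0), (3, 4), (4, 5), (5, 3)]
    for perm in permutations(range(1, 7)):
        diffs = [perm[v] - perm[u] for u, v in arcs]
        mags = [abs(d) for d in diffs]
        if len(set(diffs)) == 6 and max(mags) <= 4 and mags.count(4) <= 1:
            return perm
    return None
```

The same search works in principle for larger \(n\), although the number of permutations grows too quickly for it to replace the induction below.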

We now prove the result by induction on \(n\). So, consider the graph \(n\overrightarrow{\bf C_3}\) with \(n\geq 10\), and assume the result holds for fewer than \(n\) directed triangles. Let \(t\) and \(r\) be two integers such that \(-4\leq r \leq 2\) and \(n=7t+r.\)

We thus have \(t\geq 2\). We will show how to construct a gdl for \(n\overrightarrow{\bf C_3}\) given a gdl for \(t\overrightarrow{\bf C_3}\). We thus have to add \(n-t\) directed triangles to \(t\overrightarrow{\bf C_3}\). For this purpose, define $$\theta=\left\lceil\frac{n-t}{2}\right\rceil=3t+\left\lceil\frac{r}{2}\right\rceil.$$

It follows that \(n-t=2\theta\) if \(r\) is even, and \(n-t=2\theta-1\) if \(r\) is odd. We now prove the lemma by considering the four cases A, B, C, D defined in Table 1.

Table 1. Four different cases.
\(n-t\) | \(r\) | \(\theta\) | Case
\(2\theta\) | \(-4\) | \(3t-2\) | A
\(2\theta\) | \(-2\) | \(3t-1\) | A
\(2\theta\) | \(0\) | \(3t\) | A
\(2\theta\) | \(2\) | \(3t+1\) | B
\(2\theta-1\) | \(-3\) | \(3t-1\) | C
\(2\theta-1\) | \(-1\) | \(3t\) | C
\(2\theta-1\) | \(1\) | \(3t+1\) | D

Case A: \(n=2\theta+t\), \(\theta\in \{3t-2,3t-1,3t\}\). Consider \(2\theta\) directed triangles \(T_1,\ldots,T_{2\theta}\), every \(T_i\) having \(\{v_{3i-2},v_{3i-1},v_{3i}\}\) as vertex set and \(\{v_{3i-2}v_{3i-1}\), \(v_{3i-1}v_{3i}\), \(v_{3i}v_{3i-2}\}\) as arc set. Consider the vertex labels \(f(v_i)\) for \(T_1,\ldots,T_{2\theta}\) shown in Table 2.
Table 2. The labeling of \(T_1,\ldots,T_{2\theta}\) for case A.
Triangle \(T_i\) | \(f(v_{3i-2})\) | \(f(v_{3i-1})\) | \(f(v_{3i})\)
\(T_1\) | \(1\) | \(2\theta\) | \(6\theta+3t-6\)
\(T_2\) | \(2\) | \(6\theta+3t-3\) | \(4\theta+3t-2\)
\(T_3\) | \(3\) | \(6\theta+3t-4\) | \(2\theta+1\)
\(T_4\) | \(4\) | \(4\theta+3t-1\) | \(6\theta+3t-2\)
\(\vdots\) | \(\vdots\) | \(\vdots\) | \(\vdots\)
\(T_{2k-1}\) (\(k=3,\ldots,\theta-3\)) | \(2k-1\) | \(2\theta+k\) | \(6\theta+3t-2k+2\)
\(T_{2k}\) (\(k=3,\ldots,\theta-3\)) | \(2k\) | \(6\theta+3t-2k+1\) | \(4\theta+3t-k+1\)
\(\vdots\) | \(\vdots\) | \(\vdots\) | \(\vdots\)

Also, let \(f'\) be a gdl for \(t\overrightarrow{\bf C_3}\) with at most one arc of magnitude \(3t-2\), and all other arcs of magnitude strictly smaller than \(3t-2\). Define \(f(v_i)=f'(v_i)+3\theta\) for \(i=6\theta+1,\ldots,6\theta+3t\). One can easily check that \(f\) is a bijection between the vertex set \(\{v_1,\ldots,v_{6\theta+3t}\}\) and \(\{1,\ldots,6\theta+3t\}\), where \(6\theta+3t=3n\).

For each \(T_i\), we define its small difference label (small-dl for short) as the minimum among \(\vert f(v_{3i-1})-f(v_{3i-2})\vert\), \(\vert f(v_{3i})-f(v_{3i-1})\vert\), and \(\vert f(v_{3i-2})-f(v_{3i})\vert\). Similarly, the big difference label (big-dl) of \(T_i\) is the maximum of these three values, and the medium difference label (medium-dl) is the remaining one. Table 3 gives the small, medium and big difference labels of \(T_1,\ldots,T_{2\theta}\). By considering two dummy directed triangles \(D_1\) and \(D_2\), we have grouped the triangles into \(\theta+1\) pairs \(\pi_0,\ldots,\pi_{\theta}\), as shown in Table 3. Two triangles belong to the same pair \(\pi_i\) if their small difference labels have the same magnitude. The difference labels given for \(D_1\) and \(D_2\) are artificial, but are helpful for simplifying the proof.

Table 3. The difference labels of the arcs of \(T_1,\ldots,T_{2\theta},D_1,D_2\) for case A.
Pair | Triangle | Small-dl | Medium-dl | Big-dl
\(\pi_0=(T_{1},T_{2})\) | \(T_1\) | \(2\theta\) | \(4\theta+3t-4\) | \(-(6\theta+3t-4)\)
\(\pi_0=(T_{1},T_{2})\) | \(T_2\) | \(-2\theta\) | \(-(4\theta+3t-2)\) | \(6\theta+3t-2\)
\(\pi_1=(T_{3},T_{4})\) | \(T_3\) | \(-(2\theta-1)\) | \(-(4\theta+3t-3)\) | \(6\theta+3t-4\)
\(\pi_1=(T_{3},T_{4})\) | \(T_4\) | \(2\theta-1\) | \(4\theta+3t-5\) | \(-(6\theta+3t-6)\)
\(\pi_2=(D_{1},T_{5})\) | \(D_1\) | \(-(2\theta-2)\) | \(-(4\theta+3t-5)\) | \(6\theta+3t-7\)
\(\pi_2=(D_{1},T_{5})\) | \(T_5\) | \(2\theta-2\) | \(4\theta+3t-7\) | \(-(6\theta+3t-9)\)
\(\vdots\) | \(\vdots\) | \(\vdots\) | \(\vdots\) | \(\vdots\)
\(\pi_k=(T_{2k},T_{2k+1})\), \(k=3,\ldots,\theta-1\) | \(T_{2k}\) | \(-(2\theta-k)\) | \(-(4\theta+3t-3k+1)\) | \(6\theta+3t-4k+1\)
\(\pi_k=(T_{2k},T_{2k+1})\), \(k=3,\ldots,\theta-1\) | \(T_{2k+1}\) | \(2\theta-k\) | \(4\theta+3t-3k-1\) | \(-(6\theta+3t-4k-1)\)
\(\vdots\) | \(\vdots\) | \(\vdots\) | \(\vdots\) | \(\vdots\)
\(\pi_{\theta}=(T_{2\theta},D_{2})\) | \(T_{2\theta}\) | \(-\theta\) | \(-(\theta+3t+1)\) | \(2\theta+3t+1\)
\(\pi_{\theta}=(T_{2\theta},D_{2})\) | \(D_{2}\) | \(\theta\) | \(\theta+3t-1\) | \(-(2\theta+3t-1)\)
Let \(s^1_i\) be the small-dl of the first triangle of \(\pi_i\), and let \(s^2_i\) be the small-dl of its second triangle. Define \(m^1_i\), \(m^2_i\), \(b^1_i\) and \(b^2_i\) in a similar way for the medium and big difference labels of \(\pi_i\). For example, \(s^1_2=-(2\theta-2)\), \(s^2_2=2\theta-2\), \(m^1_2=-(4\theta+3t-5)\), \(m^2_2=4\theta+3t-7\), \(b^1_2=6\theta+3t-7\), and \(b^2_2=-(6\theta+3t-9)\). Note that \(s^j_i+m^j_i=-b^j_i\) and \(|s^j_i|+|m^j_i|=|b^j_i|\) for all \(i=0,\ldots,\theta\) and \(j=1,2\). The following properties are valid for every \(\pi_i\) with \(2\leq i\leq \theta\):
  • \(s^1_i\), \(m^1_i\) and \(b^2_i\) are negative integers, while \(s^2_i\), \(m^2_i\) and \(b^1_i\) are positive integers;
  • \(s^2_i=-s^1_i\), \(m^2_i=-m^1_i-2\), and \(b^2_i=-b^1_i+2\);
  • if \(i< \theta\), then \(s^1_{i+1}=s^1_i+1\), \(m^1_{i+1}=m^1_i+3\), and \(b^1_{i+1}=b^1_i-4\).
Note that all big difference labels \(b^j_{i}\) have the same parity for \(2\leq i\leq \theta\), \(j=1,2\), while for the medium ones, the parities alternate between successive \(\pi_i\) and \(\pi_{i+1}\). Moreover, the largest magnitude is \(6\theta+3t-2= 3n-2\), and there is exactly one arc with this magnitude. Since \(\theta< 3t+1\), we have \(\theta+3t+1>2\theta\), which means that no medium-dl can be equal to a small-dl, with the exception of \(m^2_{\theta}\), which can be equal to \(2\theta\) or \(2\theta-1\). This exception is harmless since \(D_2\) (the second triangle of \(\pi_{\theta}\)) is a dummy triangle. Notice also that the small difference labels in Table 3 are all distinct, which is also the case for the medium and the big ones. Since all difference labels on \(T_{2\theta+1},\ldots,T_{2\theta+t}\) are distinct, we conclude that there are only two possibilities for two arcs \(uv\) and \(u'v'\) of \(n\overrightarrow{\bf C_3}\) to have the same difference label \(f(v)-f(u)=f(v')-f(u')\):
  • One of these arcs belongs to \(T_{2\theta+1},\ldots,T_{2\theta+t}\) and the other to \(T_{1},\ldots,T_{2\theta}\);
  • Both arcs belong to \(T_{1},\ldots,T_{2\theta}\), one having a big-dl, and the other a medium-dl.

Consider the first case. Remember that there is at most one arc on \(T_{2\theta+1},\ldots,T_{2\theta+t}\) with magnitude \(3t-2\), all other arcs having a smaller magnitude. Since at most one arc on \(T_{1},\ldots,T_{2\theta}\) has a magnitude equal to \(\theta\geq 3t-2\), we conclude that such a situation can occur at most once (with \(\theta=3t-2\)), and we can avoid it by flipping all triangles \(T_{2\theta+1},\ldots,T_{2\theta+t}\).

More precisely, by flipping a directed triangle \(\overrightarrow{C_3}\) with vertex set \(\{x,y,z\}\) and arc set \(\{xy,yz,zx\}\), we mean exchanging the vertex labels of \(y\) and \(z\). Hence, the set of difference labels is modified from \(\{f(y)-f(x), f(z)-f(y), f(x)-f(z)\}\) to \(\{f(z)-f(x), f(y)-f(z), f(x)-f(y)\}\), which means that each difference label of the original set appears with an opposite sign in the modified set, but with the same magnitude.
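In code, flipping is just a swap of two vertex labels; every difference label of the triangle changes sign while keeping its magnitude:

```python
def flip(f, x, y, z):
    """Flip the triangle with arcs xy, yz, zx by exchanging the labels of y and z."""
    f = dict(f)
    f[y], f[z] = f[z], f[y]
    return f

# Example triangle: difference labels 1, 2, -3 become 3, -2, -1 after the flip.
f = {'x': 1, 'y': 2, 'z': 4}
before = [f['y'] - f['x'], f['z'] - f['y'], f['x'] - f['z']]
g = flip(f, 'x', 'y', 'z')
after = [g['y'] - g['x'], g['z'] - g['y'], g['x'] - g['z']]
```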

Consider the second case, and let \(i\) and \(j\) be such that \(b^x_{i}=m^y_{j}\) for \(x,y\) in \(\{1,2\}\).

Note that \(0\leq j < i \leq \theta\). We say that \(\pi_i\) is conflicting with \(\pi_j\) and we write \(\pi_i\rightarrow \pi_j\). If \(\pi_i\) is not conflicting with \(\pi_j\), we write \(\pi_i\nrightarrow \pi_j\). Note that

\begin{equation}\label{a} \text{if there are } k< j< i \text{ such that }\pi_i\rightarrow \pi_j\rightarrow \pi_k, \text{ then }\pi_k\nrightarrow \pi_{\ell}\text{ for all }\ell< k. \end{equation}
(1)

Indeed, if \(\pi_i\rightarrow \pi_j\rightarrow \pi_k\), then there are \(x,y,z,w\) in \(\{1,2\}\) such that \(b^x_i=m^y_j\) and \(b^z_j=m^w_k\). Then: $$|b^w_k|=|m^w_k|+|s^w_k|=|b^z_j|+|s^w_k|\geq|b^y_j|+|s^w_k|-2=|m^y_j|+|s^y_j|+|s^w_k|-2=|b^x_i|+|s^y_j|+|s^w_k|-2.$$

Since \(|b^x_i|\geq 2\theta+3t+1, |s^w_k |> |s^y_j|> |s^x_i|\geq\theta\), we have \(\min\{|b^1_k|,|b^2_k|\}\geq|b^w_k|-2\geq4\theta+3t\). Hence, \(\pi_k\nrightarrow \pi_{\ell}\) for all \(\ell< k\) since there is no arc with medium magnitude at least equal to \(4\theta+3t\).

We now show how to avoid conflicting pairs \(\pi_i\) and \(\pi_j\) with both \(i\) and \(j\) at least equal to 2. Conflicts involving \(\pi_0\) and \(\pi_1\) (i.e., \(T_{1},\ldots,T_{4}\)) will be handled later. Consider \(i\) and \(j\) such that \(2\leq j< i< \theta\) and \(\pi_i\rightarrow \pi_j\). Since \(b^1_i\) and \(m^2_j\) are positive, while \(b^2_i\) and \(m^1_j\) are negative, we either have \(b^1_i=m^2_j\) or \(b^2_i=m^1_j\). In the first case, we say that \(\pi_i\) is \(12-\)conflicting with \(\pi_j\), while in the second case, we say that \(\pi_i\) is \(21-\)conflicting with \(\pi_j\). Note that

\begin{equation}\label{b} \text{if }\pi_i\text{ is }12-\text{conflicting with }\pi_j\text{, then }\pi_{i-1}\text{ is }21-\text{conflicting with }\pi_j\text{ and }\pi_{i+1}\nrightarrow \pi_j. \end{equation}
(2)
\begin{equation}\label{c} \text{if }\pi_i\text{ is }21-\text{conflicting with }\pi_j\text{, then }\pi_{i+1}\text{ is }12-\text{conflicting with }\pi_j\text{ and }\pi_{i-1}\nrightarrow \pi_j. \end{equation}
(3)

Indeed, if \(\pi_i\) is \(12-\)conflicting with \(\pi_j\), then \(b^1_i=m^2_j\), which implies \(b^2_{i-1}\!=\!-b^1_i\!-\!2\!=\!-m^2_j\!-\!2\!=\!m^1_j\). Since \(\max\{|b^1_{i+1}|,|b^2_{i+1}|\}=|b^1_{i+1}|=b^1_i-4< m^2_j\leq \min\{|m^1_j|,|m^2_j|\}\), we have \(\pi_{i+1}\nrightarrow \pi_j\).

Similarly, if \(\pi_i\) is \(21-\)conflicting with \(\pi_j\), then \(b^2_i=m^1_j\), which implies \(b^1_{i+1}\!=\!-b^2_i\!-\!2\!=\!-m^1_j\!-\!2\!=\!m^2_j\). Moreover, since \(\min\{|b^1_{i-1}|,|b^2_{i-1}|\}=|b^2_{i-1}|=|b^2_i|+4>\vert m^1_j\vert= \max\{|m^1_j|,|m^2_j|\}\), we have \(\pi_{i-1}\nrightarrow \pi_j\). Observe also that:
\begin{equation}\label{d} \text{if }\pi_i\rightarrow \pi_j\text{ for }2\leq j\text{, then }\pi_k\nrightarrow \pi_j\text{ for }2\leq k\neq i,i-1,i+1. \end{equation}
(4)
Indeed, if \(2\leq k< i-1\), then \(\min\{|b^1_k|,|b^2_k|\}\geq\max\{|m^1_j|,|m^2_j|\}+4\), while for \(\theta\geq k>i+1\), we have \(\max\{|b^1_k|,|b^2_k|\}\leq\min\{|m^1_j|,|m^2_j|\}-4.\) In both cases, none of \(m^1_j\) and \(m^2_j\) can be equal to \(b^1_k\) or \(b^2_k\). As a further property, note that:
\begin{equation}\label{e} \text{if }\pi_i\rightarrow \pi_j\text{ for }2\leq j\text{, then }\pi_i\nrightarrow \pi_k\text{ for }1\leq k \neq j. \end{equation}
(5)

Indeed, let us first show that \(\pi_i\nrightarrow \pi_{j-1}\). If \(j=2\), then \(m^1_1\!=\!m^1_2\!-\!2\!=\!-\!m^2_2\!-\!4\) and \(m^2_1\!=\!-m^1_2\!=\!m^2_2\!+\!2\). Since we have either \(b^1_i=m^2_2\) and \(b^2_i=-m^2_2+2\), or \(b^2_i=m^1_2\) and \(b^1_i=-m^1_2+2\), we see that \(\pi_i\nrightarrow \pi_1\). For \(j>2\), observe that \(b^1_i,b^2_i,m^1_j,m^2_j\) all have the same parity, while \(m^1_{j-1},m^2_{j-1}\) have the opposite parity. Hence \(\pi_i\nrightarrow \pi_{j-1}\).

Similarly, \(\pi_i\nrightarrow \pi_{j+1}\) for all \(2\leq j \leq \theta-1\) since the parity of \(m^1_{j+1},m^2_{j+1}\) is the opposite of the parity of \(b^1_i,b^2_i\).

Now, let \(x,y\in\{1,2\}\) be such that \(b^x_i=m^y_j\). If \(1\leq k< j-1\), then \(\min\{|m^1_k|,|m^2_k|\}\geq\max\{|b^1_i|,|b^2_i|\}+2\), while for \(\theta\geq k>j+1\), \(\max\{|m^1_k|,|m^2_k|\}\leq\min\{|b^1_i|,|b^2_i|\}-2\).

In both cases, none of \(m^1_k\) and \(m^2_k\) can be equal to \(b^1_i\) or \(b^2_i\), which proves that \(\pi_i\nrightarrow \pi_{k}\) for \(k\geq 1, k\neq j-1,j,j+1\).

In what follows, we will remove conflicts by flipping some triangles. More precisely, by flipping \(\pi_i\), we mean flipping both triangles in \(\pi_i\). Note that:
\begin{equation}\label{f}\tag{6} \text{if }\pi_i\rightarrow \pi_j\text{ for }j\geq 2\text{, then }\pi_i\nrightarrow \pi_k\text{ for all }k\geq 2\text{ after the flip of }\pi_i. \end{equation}
Indeed, if \(\pi_i\) is \(12-\)conflicting with \(\pi_j\), then \(b^1_i=m^2_j\), and there is no triangle with medium-dl equal to \(-b^1_i=-m^2_j\) or \(-b^2_i=b^1_i-2=m^2_j-2\). Similarly, if \(\pi_i\) is \(21-\)conflicting with \(\pi_j\), then \(b^2_i=m^1_j\), and there is no triangle with medium-dl equal to \(-b^1_i=-b^2_i+2=-m^1_j+2\) or \(-b^2_i=-m^1_j\). Hence, we have \(\pi_i\nrightarrow \pi_k\) for all \(k\geq 2\) after the flip of \(\pi_i\). Also,
\begin{equation}\label{g}\tag{7} \text{if }\pi_i\rightarrow \pi_j\text{ for }j\geq 2\text{, then }\pi_k\nrightarrow \pi_j\text{ for all }k\leq \theta\text{ after the flip of }\pi_j. \end{equation}

Indeed, if \(\pi_i\) is \(12-\)conflicting with \(\pi_j\), then \(b^1_i=m^2_j\), \(b^2_{i-1}=m^1_j\), and there is no triangle with a big-dl equal to \(-m^1_j=-b^2_{i-1}\) or \(-m^2_j=-b^1_i\). Similarly, if \(\pi_i\) is \(21-\)conflicting with \(\pi_j\), then \(b^2_i=m^1_j\), \(b^1_{i+1}=m^2_j\), and there is no triangle with a big-dl equal to \(-m^1_j=-b^2_{i}\) or \(-m^2_j=-b^1_{i+1}\). Hence, we have \(\pi_k\nrightarrow \pi_j\) for all \(k\leq \theta\) after the flip of \(\pi_j\).

Now, let \(J\) be the set of integers \(j\) such that \(\pi_i\rightarrow \pi_j\rightarrow \pi_k\) for at least one pair \(i,k\) of integers with \(2\leq k< j< i\leq \theta\). Also, let \(J'\) be the set of integers \(j'\) such that there is \(k\geq 2\) and \(j\neq j'\) in \(J\) with \(\pi_j\rightarrow \pi_k\) and \(\pi_{j'}\rightarrow \pi_k\). Note that \(J\cap J'=\emptyset\). Indeed, consider \(j'\in J'\), and \(j\neq j'\) in \(J\) such that \(\pi_j\rightarrow \pi_k\) and \(\pi_{j'}\rightarrow \pi_k\). It follows from (2), (3) and (4) that \(j'\!=\!j\!-\!1\) or \(j'\!=\!j\!+\!1\). Since \(j\in J\), \(m^1_{j}\) and \(m^2_{j}\) have the same parity as the big difference labels on \(T_5,\ldots,T_{2\theta}\), which means that \(m^1_{j'}\) and \(m^2_{j'}\) have the opposite parity. Hence, there is no \(i\) with \(\pi_i\rightarrow \pi_{j'}\), which proves that \(j'\notin J\).

By flipping all \(\pi_{\ell}\) with \(\ell\in J\cup J'\), we get \(\pi_i\nrightarrow \pi_j\) for all \(2\leq j< i\leq \theta\) with \(i\) or \(j\) in \(J\cup J'\). Indeed, it follows from (1) that we cannot have \(\pi_i\rightarrow \pi_j\) with both \(i\) and \(j\) in \(J\cup J'\), since this would imply the existence of \(k,k'\) with \(2\leq k < k'\leq \theta\) and \(\pi_{k'}\rightarrow \pi_i\rightarrow \pi_j\rightarrow \pi_k\). Hence, it follows from (6) and (7) that \(\pi_i\nrightarrow \pi_j\) for \(i\) or \(j\) in \(J\), \(2\leq j< i \leq \theta\). Moreover, as observed above, \(j'\in J'\) implies that \(m^1_{j'}\) and \(m^2_{j'}\) do not have the same parity as the big difference values on \(T_5,\ldots,T_{2\theta}\). Hence, it follows from (6) that \(\pi_i\nrightarrow \pi_j\) for \(i\) or \(j\) in \(J'\), \(2\leq j< i\leq \theta\).

So, after the flipping of all \(\pi_{\ell}\) with \(\ell\in J\cup J'\), the remaining conflicts \(\pi_i\rightarrow \pi_j\) with \(2\leq j < i\leq \theta\) are such that \(\{i,j\}\cap (J\cup J')=\emptyset\). Consider any such conflict. If there is \(i'\neq i\) such that \(\pi_{i'}\rightarrow \pi_j\), then we know from (4) that \(i'=i-1\) or \(i+1\). Without loss of generality, we may assume \(i'=i+1\) (else we permute the roles of \(i\) and \(i'\)). Since none of \(j,i,i'\) belongs to \(J\cup J'\), there is no \(k\) such that \(\pi_k\rightarrow \pi_i\), \(\pi_k\rightarrow \pi_{i'}\) or \(\pi_j\rightarrow \pi_k\). Also, it follows from (4) that there is no \(k\neq i,i'\) such that \(\pi_k\rightarrow \pi_{j}\).

  • If \(i\leq 2\theta/3\), we flip \(\pi_j\). We then have \(\min\{|b^1_i|,|b^2_i|\}\geq 6\theta+3t-4(2\theta/3)-1=10\theta/3+3t-1\). It follows that \(j\leq 2\theta/9\), since otherwise \(\max\{|m^1_j|,|m^2_j|\}\leq 4\theta+3t-3(2\theta/9)-2=10\theta/3+3t-2\). Hence \(\min\{|b^1_j|,|b^2_j|\}\geq 6\theta+3t-4(2\theta/9)-1=46\theta/9+3t-1>4\theta+3t-2.\) Since the medium magnitudes are at most equal to \(4\theta+3t-2\), we cannot have \(\pi_j\rightarrow \pi_k\) after the flip of \(\pi_j\). Also, it follows from (7) that, after the flip of \(\pi_j\), we have \(\pi_k\nrightarrow \pi_{j}\) for \(j< k\leq \theta\). Hence, after the flip of \(\pi_j\), the difference labels on its two triangles are different from those on the other triangles \(T_k\), \(k\geq 5\).
  • If \(i> 2\theta/3\), we flip \(\pi_i\) and \(\pi_{i'}\) (if any). In this case, we have \(\max\{|m^1_{i'}|,|m^2_{i'}|\}< \max\{|m^1_i|,|m^2_i|\}\) \(\leq 4\theta+3t-3(2\theta/3)=2\theta+3t\). Since all big magnitudes on \(T_1,\ldots,T_{2\theta}\) are strictly larger than \(2\theta+3t\), we cannot have \(\pi_k\rightarrow \pi_i\) after the flip of \(\pi_i\) and \(\pi_{i'}\). Also, it follows from (6) that after the flip of \(\pi_i\) and \(\pi_{i'}\), we have \(\pi_i\nrightarrow \pi_{k}\) and \(\pi_{i'}\nrightarrow \pi_{k}\) for \(2\leq k< i\). Hence, after the flip of \(\pi_i\) and \(\pi_{i'}\), the difference labels on their triangles are different from those on the other triangles \(T_k\), \(k\geq 5\).

After all these flips, there is no \(\pi_i\rightarrow \pi_j\) with \(2\leq j< i\leq \theta\). We consider now triangles \(T_1,T_2,T_3,T_4\) involved in \(\pi_0\) and \(\pi_1\). If there is \(j\geq 2\) such that \(\pi_j\rightarrow \pi_1\), then we know from (5) that \(\pi_j\nrightarrow \pi_k\) for all \(2\leq k< j\). Hence, \(j\notin J\cup J'\). If, before the flips, there was \(i\) such that \(\pi_i\rightarrow \pi_j\), then \(i> 2\theta/3\). Indeed, we have seen above that if \(i\leq 2\theta/3\), then \(\min\{|b^1_j|,|b^2_j|\}>4\theta+3t-2\), which means that \(\pi_j\nrightarrow \pi_1\). So, \(\pi_j\) was not flipped, and by flipping \(\pi_1\), we get \(\pi_j\nrightarrow \pi_1\) for all \(2\leq j\leq \theta\).

Since the parity of \(m^1_0\) and \(m^2_0\) is the opposite of the parity of \(b^1_i\) and \(b^2_i\) for all \(i\geq 2\), we have \(\pi_j\nrightarrow \pi_0\) for all \(2\leq j\leq \theta\). Hence, the only possible remaining conflict is between \(\pi_0\) and \(\pi_1\). This can only occur if \(b^1_0=b^1_1\) and \(\pi_1\) was flipped. In such a case, we flip \(\pi_0\) to remove this last conflict.
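The flip operation used throughout this argument can be illustrated numerically. The following sketch is ours (the function names are assumptions, not the paper's notation): reversing the orientation of a circuit negates every arc's difference label, which is why flipping a pair of triangles removes the conflicts described above.

```python
# Minimal sketch (our own notation): flipping a circuit reverses its
# orientation, which negates every arc's difference label.

def arc_labels(f_values):
    """Difference labels f(v) - f(u) along a directed cycle (vertices in order)."""
    n = len(f_values)
    return [f_values[(i + 1) % n] - f_values[i] for i in range(n)]

def flip(f_values):
    """Traverse the same cycle in the opposite direction."""
    return list(reversed(f_values))

# Example triangle with vertex labels 1, 9, 24 (T_1 of Table 4 for t = 1):
labels = arc_labels([1, 9, 24])          # [8, 15, -23]
flipped = arc_labels(flip([1, 9, 24]))   # the same three labels, negated
```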

Case B: \(n=2\theta+t\), \(\theta=3t+1\). We treat this case as the previous one. More precisely, the vertex labels \(f(v_i)\) on \(T_1,\ldots,T_{2\theta}\) are given in Table 4. Given a gdl \(f'\) for \(t\overrightarrow{\bf C_3}\) with at most one arc of magnitude \(3t-2\), and all other arcs of magnitude strictly smaller than \(3t-2\), we set \(f(v_i)=f'(v_i)+3\theta\) for \(i=6\theta+1,\ldots,6\theta+3t\). Again, one can easily check that \(f\) is a bijection between \(\{v_1,\ldots,v_{6\theta+3t}\}\) and \(\{1, \ldots,6\theta+3t=3n\}\).
Table 4. The labeling of \(T_1,\ldots,T_{2\theta}\) for case B.
Triangle \(T_i\) \(f(v_{3i-2})\) \(f(v_{3i-1})\) \(f(v_{3i})\)
\(T_1\) \(1\) \(2\theta+1\) \(6\theta+3t-3\)
\(T_2\) \(2\) \(6\theta+3t\) \(4\theta+3t\)
\(T_3\) \(3\) \(6\theta+3t-1\) \(2\theta+2\)
\(T_4\) \(4\) \(4\theta+3t-1\) \(6\theta+3t-2\)
\(\vdots\) \(\vdots\) \(\vdots\) \(\vdots\)
\(T_{2k-1}\) \(2k-1\) \(2\theta+k\) \(6\theta+3t-2k+2\)
\(T_{2k}\) \(2k\) \(6\theta+3t-2k+1\) \(4\theta+3t-k+1\)
\(\vdots\) \(\vdots\) \(\vdots\) \(\vdots\)
\(T_{2\theta-1}\) \(2\theta-1\) \(3\theta+3t+1\) \(4\theta+3t+1\)
\(T_{2\theta}\) \(2\theta\) \(4\theta+3t+2\) \(3\theta\)
The small, medium, and big difference labels for triangles \(T_1,\ldots,T_{2\theta}\) are given in Table 5. Again, the triangles are grouped in pairs, using two dummy triangles \(D_1\) and \(D_2\) which are paired with \(T_5\) and \(T_{2\theta-2}\), respectively. Notice that for every \(uv\) on a \(T_i\) with \(i\leq 2\theta\) and every \(u'v'\) on a \(T_j\) with \(j>2\theta\), we have \(f(v)-f(u)\neq f(v')-f(u')\) since the smallest possible magnitude for \(uv\) is \(\theta=3t+1\), while the largest possible magnitude for \(u'v'\) is \(3t-2\). Hence, in this case, we do not have to flip triangles \(T_{2\theta+1},\ldots,T_{2\theta+t}\). Note also that the largest magnitude is \(6\theta+3t-2= 3n-2\), and there is exactly one arc with this magnitude.
Table 5. The difference labels of the arcs of \(T_1,\ldots,T_{2\theta},D_1,D_2\) for case B.
Pair Triangle Small-dl Medium-dl Big-dl
\(\pi_0=(T_1,T_2)\) \(T_1\) \(2\theta\) \(4\theta+3t-4\) \(-(6\theta+3t-4)\)
\(T_2\) \(-2\theta\) \(-(4\theta+3t-2)\) \(6\theta+3t-2\)
\(\pi_1=(T_3,T_4)\) \(T_3\) \(-(2\theta-1)\) \(-(4\theta+3t-3)\) \(6\theta+3t-4\)
\(T_4\) \(2\theta-1\) \(4\theta+3t-5\) \(-(6\theta+3t-6)\)
\(\pi_2=(D_1,T_5)\) \(D_1\) \(-(2\theta-2)\) \(-(4\theta+3t-5)\) \(6\theta+3t-7\)
\(T_5\) \(2\theta-2\) \(4\theta+3t-7\) \(-(6\theta+3t-9)\)
\(\vdots\) \(\vdots\) \(\vdots\) \(\vdots\) \(\vdots\)
\(\pi_k=(T_{2k},T_{2k+1})\) \(T_{2k}\) \(-(2\theta-k)\) \(-(4\theta+3t-3k+1)\) \(6\theta+3t-4k+1\)
\(k=3,\ldots,\theta-2\) \(T_{2k+1}\) \(2\theta-k\) \(4\theta+3t-3k-1\) \(-(6\theta+3t-4k-1)\)
\(\vdots\) \(\vdots\) \(\vdots\) \(\vdots\) \(\vdots\)
\(\pi_{\theta-1}=(T_{2\theta-2},D_{2})\) \(T_{2\theta-2}\) \(-(\theta+1)\) \(-(\theta+3t+4)\) \(2\theta+3t+5\)
\(D_{2}\) \(\theta+1\) \(\theta+3t+2\) \(-(2\theta+3t+3)\)
\(\pi_{\theta}=(T_{2\theta-1},T_{2\theta})\) \(T_{2\theta-1}\) \(\theta\) \(\theta+3t+2\) \(-(2\theta+3t+2)\)
\(T_{2\theta}\) \(-\theta\) \(-(\theta+3t+2)\) \(2\theta+3t+2\)
Since \(\theta=3t+1\), we have \(\theta+3t+2=2\theta+1\), which means that no medium-dl can be equal to a small-dl. The small, medium and big difference labels on \(T_1,\ldots,T_{2\theta-2}\) are exactly the same as those of Table 3. Using the same arguments as in the previous case, we can avoid conflicts involving medium and big difference labels of \(\pi_0,\ldots,\pi_{\theta-1}\). Consider now \(\pi_{\theta}\):
  • The medium difference values of \(\pi_{\theta}\) can only be conflicting with the medium-dl of \(D_2\), but we don't care about such a conflict since \(D_2\) is a dummy triangle;
  • The big difference values of \(\pi_{\theta}\) can only be conflicting with the medium-dl of a \(T_k\). For this to happen, we should have \(2\theta+3t+2\) equal to \(4\theta+3t-3k+1\) or \(4\theta+3t-3k-1\), or equivalently \(k\) equal to \(\frac{2\theta-1}{3}=\frac{6t+1}{3}\) or \(\frac{2\theta-3}{3}=\frac{6t-1}{3}\), which is impossible since \(k\) is an integer.
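The Case B construction above can be spot-checked mechanically for small \(t\). The following sketch is ours (function and variable names are assumptions, and the pattern rows of Table 4 are taken for \(k=3,\ldots,\theta-1\)): it verifies that the labels form a bijection onto \(\{1,\ldots,3n\}\) minus the block reserved for the \(t\) shifted triangles, that the smallest magnitude is \(\theta\), and that the largest magnitude \(6\theta+3t-2\) occurs on exactly one arc, as claimed.

```python
# Spot-check of the Case B labeling (Table 4) for small t.

def table4_labels(t):
    theta = 3 * t + 1
    N = 6 * theta + 3 * t                       # = 3n with n = 2*theta + t
    tri = {1: (1, 2 * theta + 1, N - 3),
           2: (2, N, 4 * theta + 3 * t),
           3: (3, N - 1, 2 * theta + 2),
           4: (4, 4 * theta + 3 * t - 1, N - 2)}
    for k in range(3, theta):                   # pattern rows T_{2k-1}, T_{2k}
        tri[2 * k - 1] = (2 * k - 1, 2 * theta + k, N - 2 * k + 2)
        tri[2 * k] = (2 * k, N - 2 * k + 1, 4 * theta + 3 * t - k + 1)
    tri[2 * theta - 1] = (2 * theta - 1, 3 * theta + 3 * t + 1, 4 * theta + 3 * t + 1)
    tri[2 * theta] = (2 * theta, 4 * theta + 3 * t + 2, 3 * theta)
    return tri

for t in (1, 2, 3):
    theta, N = 3 * t + 1, 6 * (3 * t + 1) + 3 * t
    tri = table4_labels(t)
    used = sorted(x for row in tri.values() for x in row)
    free = set(range(3 * theta + 1, 3 * theta + 3 * t + 1))   # left for t*C3, shifted by 3*theta
    assert used == sorted(set(range(1, N + 1)) - free)
    mags = [abs(r[(i + 1) % 3] - r[i]) for r in tri.values() for i in range(3)]
    assert min(mags) == theta and max(mags) == N - 2 and mags.count(N - 2) == 1
```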
Case C: \(n=2\theta+t-1\), \(\theta\in\{3t-1,3t\}\). Again, consider the vertex labels \(f(v_i)\) on \(T_1,\ldots,T_{2\theta-1}\) shown in Table 6. Given a gdl \(f'\) for \(t\overrightarrow{\bf C_3}\) with at most one arc of magnitude \(3t-2\), and all other arcs of magnitude strictly smaller than \(3t-2\), we set \(f(v_i)=f'(v_i)+3\theta-1\) for \(i=6\theta-2,\ldots,6\theta+3t-3\). One can easily check that \(f\) is a bijection between \(\{v_1,\ldots,v_{6\theta+3t-3}\}\) and \(\{1, \ldots,6\theta+3t-3=3n\}\). The small, medium, and big difference labels for triangles \(T_1,\ldots,T_{2\theta-1}\) are given in Table 7.
Table 6. The labeling of \(T_1,\ldots,T_{2\theta-1}\) for case C.
Triangle \(T_i\) \(f(v_{3i-2})\) \(f(v_{3i-1})\) \(f(v_{3i})\)
\(T_1\) \(1\) \(2\theta\) \(6\theta+3t-6\)
\(T_2\) \(2\) \(6\theta+3t-3\) \(4\theta+3t-2\)
\(T_3\) \(3\) \(6\theta+3t-4\) \(2\theta+1\)
\(T_4\) \(4\) \(4\theta+3t-3\) \(6\theta+3t-5\)
\(T_5\) \(5\) \(2\theta+2\) \(6\theta+3t-7\)
\(\vdots\) \(\vdots\) \(\vdots\) \(\vdots\)
\(T_{2k}\) \(2k\) \(6\theta+3t-2k-2\) \(4\theta+3t-k-1\)
\(T_{2k+1}\) \(2k+1\) \(2\theta+k\) \(6\theta+3t-2k-3\)
\(k=3,\ldots,\theta-1\)
\(\vdots\) \(\vdots\) \(\vdots\) \(\vdots\)
Table 7. The difference labels of the arcs of \(T_1,\ldots,T_{2\theta-1},D_1\) for case C.
Pair Triangle Small-dl Medium-dl Big-dl
\(\pi_0=(T_1,T_2)\) \(T_1\) \(2\theta-1\) \(4\theta+3t-6\) \(-(6\theta+3t-7)\)
\(T_2\) \(-(2\theta-1)\) \(-(4\theta+3t-4)\) \(6\theta+3t-5\)
\(\pi_1=(T_3,T_4)\) \(T_3\) \(-(2\theta-2)\) \(-(4\theta+3t-5)\) \(6\theta+3t-7\)
\(T_4\) \(2\theta-2\) \(4\theta+3t-7\) \(-(6\theta+3t-9)\)
\(\pi_2=(D_1,T_5)\) \(D_1\) \(-(2\theta-3)\) \(-(4\theta+3t-7)\) \(6\theta+3t-10\)
\(T_5\) \(2\theta-3\) \(4\theta+3t-9\) \(-(6\theta+3t-12)\)
\(\vdots\) \(\vdots\) \(\vdots\) \(\vdots\) \(\vdots\)
\(\pi_k=(T_{2k},T_{2k+1})\) \(T_{2k}\) \(-(2\theta-k-1)\) \(-(4\theta+3t-3k-1)\) \(6\theta+3t-4k-2\)
\(k=3,\ldots,\theta-1\) \(T_{2k+1}\) \(2\theta-k-1\) \(4\theta+3t-3k-3\) \(-(6\theta+3t-4k-4)\)
\(\vdots\) \(\vdots\) \(\vdots\) \(\vdots\) \(\vdots\)

Again, the triangles are grouped in pairs, using one dummy triangle \(D_1\) which is paired with \(T_5\). Notice that for every \(uv\) on a \(T_i\) with \(i\leq 2\theta-1\) and every \(u'v'\) on a \(T_j\) with \(j>2\theta-1\), we have \(f(v)-f(u)\neq f(v')-f(u')\) since the smallest possible magnitude for \(uv\) is \(\theta\geq 3t-1\), while the largest possible magnitude for \(u'v'\) is \(3t-2\). Hence, also in this case, we do not have to flip \(T_{2\theta},\ldots,T_{2\theta+t-1}\). Note also that the largest magnitude is \(6\theta+3t-5=3n-2\), and there is exactly one arc with this magnitude.

Since \(\theta< 3t+1\), we have \(\theta+3t>2\theta-1\), which means that no medium-dl can be equal to a small-dl. Using the same arguments as in the previous cases, we can avoid conflicts involving \(\pi_2,\ldots,\pi_{\theta-1}\).

If there is \(j\geq 2\) such that \(\pi_j\rightarrow \pi_0\), suppose there is \(i>j\) such that \(\pi_i\rightarrow \pi_j\). If \(i\leq 2\theta/3\), then \(\min\{|b^1_i|,|b^2_i|\}\geq 6\theta+3t-4(2\theta/3)-4=10\theta/3+3t-4\). It follows that \(j\leq (2\theta+3)/9\), since otherwise \(\max\{|m^1_j|,|m^2_j|\}\leq 4\theta+3t-3(2\theta+3)/9-4=10\theta/3+3t-5\). Hence \(\min\{|b^1_j|,|b^2_j|\}\geq 6\theta+3t-4(2\theta+3)/9-4=46\theta/9+3t-48/9>4\theta+3t-4\), which contradicts \(\pi_j\rightarrow \pi_0\). Hence, we necessarily have \(i>2\theta/3\), and since \(j\) cannot belong to \(J\cup J'\), we conclude that \(\pi_j\) was not flipped. Hence, by flipping \(\pi_0\), we get \(\pi_j\nrightarrow \pi_0\) for all \(j\geq 2\).

Since the parity of \(m^1_1\) and \(m^2_1\) is the opposite of the parity of \(b^1_i\) and \(b^2_i\) for all \(i\geq 2\), we have \(\pi_j\nrightarrow \pi_1\) for all \(j\geq 2\). Hence, the only possible remaining conflict is between \(\pi_0\) and \(\pi_1\). This can only occur if \(b^1_0=b^1_1\) and \(\pi_0\) was flipped. In such a case, we flip \(\pi_1\) to remove this last conflict.
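As with Case B, the Case C labeling of Table 6 can be spot-checked numerically. This sketch is ours; it reads the elliptic pattern rows of Table 6 as running over \(k=3,\ldots,\theta-1\) (consistent with the pairing in Table 7), and checks the bijection and largest-magnitude claims for both admissible values of \(\theta\).

```python
# Spot-check of the Case C labeling (Table 6), theta in {3t-1, 3t}.

def table6_labels(t, theta):
    N = 6 * theta + 3 * t - 3                   # = 3n with n = 2*theta + t - 1
    tri = {1: (1, 2 * theta, N - 3),
           2: (2, N, 4 * theta + 3 * t - 2),
           3: (3, N - 1, 2 * theta + 1),
           4: (4, 4 * theta + 3 * t - 3, N - 2),
           5: (5, 2 * theta + 2, N - 4)}
    for k in range(3, theta):                   # pattern rows T_{2k}, T_{2k+1}
        tri[2 * k] = (2 * k, N - 2 * k + 1, 4 * theta + 3 * t - k - 1)
        tri[2 * k + 1] = (2 * k + 1, 2 * theta + k, N - 2 * k)
    return tri

for t in (2, 3, 4):
    for theta in (3 * t - 1, 3 * t):
        N = 6 * theta + 3 * t - 3
        tri = table6_labels(t, theta)
        used = sorted(x for row in tri.values() for x in row)
        free = set(range(3 * theta, 3 * theta + 3 * t))       # left for t*C3, shifted
        assert used == sorted(set(range(1, N + 1)) - free)
        mags = [abs(r[(i + 1) % 3] - r[i]) for r in tri.values() for i in range(3)]
        assert min(mags) == theta and max(mags) == N - 2 and mags.count(N - 2) == 1
```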

Case D: \(n=2\theta+t-1\), \(\theta=3t+1\). Consider the vertex labels \(f(v_i)\) on \(T_1,\ldots,T_{2\theta-1}\) shown in Table 8. Given a gdl \(f'\) for \(t\overrightarrow{\bf C_3}\) with at most one arc of magnitude \(3t-2\), and all other arcs of magnitude strictly smaller than \(3t-2\), we set \(f(v_i)=f'(v_i)+3\theta-1\) for \(i=6\theta-2,\ldots,6\theta+3t-3\). One can easily check that \(f\) is a bijection between \(\{v_1,\ldots,v_{6\theta+3t-3}\}\) and \(\{1, \ldots,6\theta+3t-3=3n\}\).

The small, medium, and big difference labels for triangles \(T_1,\ldots,T_{2\theta-1}\) are given in Table 9. Again, the triangles are grouped in pairs, using one dummy triangle \(D_1\) which is paired with \(T_5\). Notice that for every \(uv\) on a \(T_i\) with \(i\leq 2\theta-1\) and every \(u'v'\) on a \(T_j\) with \(j>2\theta-1\), we have \(f(v)-f(u)\neq f(v')-f(u')\) since the smallest possible magnitude for \(uv\) is \(\theta-2=3t-1\), while the largest possible magnitude for \(u'v'\) is \(3t-2\). Hence, also in this case, we do not have to flip \(T_{2\theta},\ldots,T_{2\theta+t-1}\). Note also that the largest magnitude is \(6\theta+3t-5=3n-2\), and there is only one arc with this magnitude.

Since \(\theta=3t+1\), we have \(\theta+3t+1=2\theta\), which means that no medium-dl can be equal to a small-dl. The small, medium and big difference labels on \(T_1,\ldots,T_{2\theta-5}\) are exactly the same as those of Table 7. Using the same arguments as in the previous case, we can avoid conflicts involving \(\pi_0,\ldots,\pi_{\theta-3}\).

Table 8. The labeling of \(T_1,\ldots,T_{2\theta-1}\) for case D.
Triangle \(T_i\) \(f(v_{3i-2})\) \(f(v_{3i-1})\) \(f(v_{3i})\)
\(T_1\) \(1\) \(2\theta\) \(6\theta+3t-6\)
\(T_2\) \(2\) \(6\theta+3t-3\) \(4\theta+3t-2\)
\(T_3\) \(3\) \(6\theta+3t-4\) \(2\theta+1\)
\(T_4\) \(4\) \(4\theta+3t-3\) \(6\theta+3t-5\)
\(T_5\) \(5\) \(2\theta+2\) \(6\theta+3t-7\)
\(\vdots\) \(\vdots\) \(\vdots\) \(\vdots\)
\(T_{2k}\) \(2k\) \(6\theta+3t-2k-2\) \(4\theta+3t-k-1\)
\(T_{2k+1}\) \(2k+1\) \(2\theta+k\) \(6\theta+3t-2k-3\)
\(k=3,\ldots,\theta-3\)
\(\vdots\) \(\vdots\) \(\vdots\) \(\vdots\)
\(T_{2\theta-4}\) \(2\theta-4\) \(4\theta+3t-1\) \(3\theta+3t+1\)
\(T_{2\theta-3}\) \(2\theta-3\) \(4\theta+3t+2\) \(3\theta-2\)
\(T_{2\theta-2}\) \(2\theta-2\) \(3\theta+3t\) \(4\theta+3t+1\)
\(T_{2\theta-1}\) \(2\theta-1\) \(3\theta-1\) \(4\theta+3t\)

Table 9. The difference labels of the arcs of \(T_1,\ldots,T_{2\theta-1},D_1\) for case D.
Pair Triangle Small-dl Medium-dl Big-dl
\(\pi_0=(T_1,T_2)\) \(T_1\) \(2\theta-1\) \(4\theta+3t-6\) \(-(6\theta+3t-7)\)
\(T_2\) \(-(2\theta-1)\) \(-(4\theta+3t-4)\) \(6\theta+3t-5\)
\(\pi_1=(T_3,T_4)\) \(T_3\) \(-(2\theta-2)\) \(-(4\theta+3t-5)\) \(6\theta+3t-7\)
\(T_4\) \(2\theta-2\) \(4\theta+3t-7\) \(-(6\theta+3t-9)\)
\(\pi_2=(D_1,T_5)\) \(D_1\) \(-(2\theta-3)\) \(-(4\theta+3t-7)\) \(6\theta+3t-10\)
\(T_5\) \(2\theta-3\) \(4\theta+3t-9\) \(-(6\theta+3t-12)\)
\(\vdots\) \(\vdots\) \(\vdots\) \(\vdots\) \(\vdots\)
\(\pi_k=(T_{2k},T_{2k+1})\) \(T_{2k}\) \(-(2\theta-k-1)\) \(-(4\theta+3t-3k-1)\) \(6\theta+3t-4k-2\)
\(k=3,\ldots,\theta-3\) \(T_{2k+1}\) \(2\theta-k-1\) \(4\theta+3t-3k-3\) \(-(6\theta+3t-4k-4)\)
\(\vdots\) \(\vdots\) \(\vdots\) \(\vdots\) \(\vdots\)
\(\pi_{\theta-2}=(T_{2\theta-3},T_{2\theta-4})\) \(T_{2\theta-3}\) \(-(\theta+1)\) \(-(\theta+3t+4)\) \(2\theta+3t+5\)
\(T_{2\theta-4}\) \(-(\theta-2)\) \(-(\theta+3t+5)\) \(2\theta+3t+3\)
\(\pi_{\theta-1}=(T_{2\theta-1},T_{2\theta-2})\) \(T_{2\theta-1}\) \(\theta\) \(\theta+3t+1\) \(-(2\theta+3t+1)\)
\(T_{2\theta-2}\) \(\theta+1\) \(\theta+3t+2\) \(-(2\theta+3t+3)\)
Consider now \(\pi_{\theta-2}\) and \(\pi_{\theta-1}\). The medium magnitudes \(|m^1_{\theta-2}|,|m^2_{\theta-2}|,|m^1_{\theta-1}|\) and \(|m^2_{\theta-1}|\) do not appear on any other triangle. Also, the medium magnitudes on a \(\pi_k\) with \(2\leq k\leq \theta-3\) are equal to \(4\theta+3t-3k-1=15t-3k+3\) or \(4\theta+3t-3k-3=15t-3k+1\), which means that they are all congruent to \(0\) or \(1 \pmod 3\). Hence, the big magnitudes \(|b^2_{\theta-2}|=|b^2_{\theta-1}|=2\theta+3t+3=9t+5\) do not appear on any other triangle as medium magnitudes. Therefore, these two big magnitudes will not be conflicting if we either flip both \(\pi_{\theta-1}\) and \(\pi_{\theta-2}\), or none of them. The only remaining possible conflicts involve a medium-dl on a \(T_i\) (\(i< \theta-2\)) and \(b^1_{\theta-2}\) or \(b^1_{\theta-1}\). Assume there is a triangle \(T_i\) with magnitude \(2\theta+3t+1=|b^1_{\theta-1}|\). This means that \(2\theta+3t+1\leq 4\theta+3t-3i-1\), which is equivalent to \(i\leq (2\theta-2)/3\). Hence, \(\pi_i\) was not flipped. Also, if there is a triangle \(T_j\) with magnitude \(2\theta+3t+5=b^1_{\theta-2}\), then \(j< i\leq (2\theta-2)/3\), which means that \(\pi_j\) was not flipped. Now,
  • If there is a triangle \(T_i\) with medium-dl \(-(2\theta+3t+1)\), then \(m^1_{i}=b^1_{\theta-1}\), and \(m^2_{i-2}=-b^1_{\theta-1}+4=2\theta+3t+5=b^1_{\theta-2}\), and we can avoid both conflicts by flipping both \(\pi_{\theta-1}\) and \(\pi_{\theta-2}\);
  • If there is a triangle \(T_j\) with medium-dl \(2\theta+3t+5\), then \(m^2_{j}=b^1_{\theta-2}\), and \(m^1_{j+2}=-b^1_{\theta-2}+4=-(2\theta+3t+1)=b^1_{\theta-1}\), and we can avoid both conflicts by flipping both \(\pi_{\theta-1}\) and \(\pi_{\theta-2}\).
  • If there is no triangle with medium-dl \(-(2\theta+3t+1)\) or \(2\theta+3t+5\), there is no conflict.
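The congruence argument above is easy to check numerically; this small script (ours, not part of the paper) verifies for a range of \(t\) that, with \(\theta=3t+1\), the medium magnitudes are \(0\) or \(1 \pmod 3\) while the big magnitude \(9t+5\) is \(2 \pmod 3\).

```python
# Quick arithmetic check of the (mod 3) argument: with theta = 3t+1, the
# medium magnitudes 4*theta+3t-3k-1 = 15t-3k+3 and 4*theta+3t-3k-3 = 15t-3k+1
# are congruent to 0 or 1 (mod 3), while 2*theta+3t+3 = 9t+5 is 2 (mod 3).
for t in range(1, 50):
    for k in range(3, 3 * t - 1):               # k = 3..theta-3
        assert (15 * t - 3 * k + 3) % 3 == 0
        assert (15 * t - 3 * k + 1) % 3 == 1
    assert (9 * t + 5) % 3 == 2
```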

We already know from Lemma \ref{C2k+1C4} that \(\overrightarrow{\bf C_4}+\overrightarrow{\bf C_3}\) has a gdl. We now show that this is also the case for \(\overrightarrow{\bf C_4}+n\overrightarrow{\bf C_3}\), \(n\geq 2\).

Lemma. \(\overrightarrow{\bf C_4}+n\overrightarrow{\bf C_3}\) has a gdl for every \(n\ge 1\).

Proof. The graphs in Figures 9, 10, 11, 12, 13, 14 and 15 show the existence of the desired gdl for \(2\leq n \leq 8\).

Figure 9. \(2\overrightarrow{\bf C_3}+\overrightarrow{\bf C_4}\).

Figure 10. \(3\overrightarrow{\bf C_3}+\overrightarrow{\bf C_4}\).

Figure 11. \(4\overrightarrow{\bf C_3}+\overrightarrow{\bf C_4}\).

Figure 12. \(5\overrightarrow{\bf C_3}+\overrightarrow{\bf C_4}\).

Figure 13. \(6\overrightarrow{\bf C_3}+\overrightarrow{\bf C_4}\).

Figure 14. \(7\overrightarrow{\bf C_3}+\overrightarrow{\bf C_4}\).

Figure 15. \(8\overrightarrow{\bf C_3}+\overrightarrow{\bf C_4}\).

For \(n\ge 9\), we know from Lemma 7 that there is a gdl for \((n+1)\overrightarrow{\bf C_3}\), which can be obtained by performing a set \(F\) of flips, starting from the labeling \(f\) defined in Tables 2, 4, 6, and 8 for cases A, B, C and D, respectively. We distinguish two cases.
  • For cases A and B, we consider the graph \(G\) obtained from \((n+1)\overrightarrow{\bf C_3}\) by inserting a new vertex \(v_0\) between \(v_5\) and \(v_6\). More precisely, \(G\) is obtained by replacing \(T_2\) in \((n+1)\overrightarrow{\bf C_3}\) by a \(\overrightarrow{\bf C_4}\) with vertex set \(\{v_0,v_4,v_5,v_6\}\) and arc set \(\{v_4v_5,v_5v_0,v_0v_6,v_6v_4\}\). We then define \(f'\) by setting \(f'(v_0)=1\) and \(f'(v_i)=f(v_i)+1\) for \(i=1,\ldots,3(n+1)\). Clearly, \(f'\) is a bijection between \(\{v_0,\ldots,v_{3(n+1)}\}\) and \(\{1,\ldots,3n+4\}\). In order to prove that by performing exactly the same set \(F\) of flips, we get a gdl for \(G\), it is sufficient to show that the difference labels on \(v_5v_0\) and \(v_0v_6\) cannot appear on other arcs of \(G\).
    • \(\vert f'(v_0)-f'(v_5)\vert=\vert 1-(6\theta+3t+1)\vert =6\theta+3t\), which means that \(v_5v_0\) has a magnitude larger than that of any other arc in \(G\).
    • \(f'(v_6)-f'(v_0)=(4\theta+3t+1)-1=4\theta+3t\). Since this value is strictly larger than any other medium magnitude in \(G\), the difference label on \(v_0v_6\) can only be conflicting with a big-dl on a \(T_i\) with \(i\geq 5\). But this does not occur since these big difference labels have the opposite parity of \(4\theta+3t\).
  • For cases C and D, we consider the graph \(G\) obtained from \((n+1)\overrightarrow{\bf C_3}\) by inserting a new vertex \(v_0\) between \(v_9\) and \(v_7\). More precisely, \(G\) is obtained by replacing \(T_3\) in \((n+1)\overrightarrow{\bf C_3}\) by a \(\overrightarrow{\bf C_4}\) with vertex set \(\{v_0,v_7,v_8,v_9\}\) and arc set \(\{v_7v_8,v_8v_9,v_9v_0,v_0v_7\}\). We then define \(f'\) by setting \(f'(v_0)=3n+4=6\theta+3t-2\) and \(f'(v_i)=f(v_i)\) for \(i=1,\ldots,3(n+1)\). Clearly, \(f'\) is a bijection between \(\{v_0,\ldots,v_{3(n+1)}\}\) and \(\{1,\ldots,3n+4\}\). In order to prove that by performing exactly the same set \(F\) of flips, we get a gdl for \(G\), it is sufficient to show that the difference labels on \(v_0v_7\) and \(v_9v_0\) do not appear on other arcs of \(G\).
    • \(f'(v_7)-f'(v_0)=3-(6\theta+3t-2) =-(6\theta+3t-5)\). The same difference label appears on \(T_2\), but with an opposite sign. These two arcs could be conflicting if exactly one of \(\pi_{0}\) and \(\pi_{1}\) is flipped, but this does not occur since \(T_1\) and \(T_3\) have big difference labels of the same magnitude, but with opposite signs.
    • \(f'(v_0)-f'(v_9)=(6\theta+3t-2)-(2\theta+1)=4\theta+3t-3\). Since this value is strictly larger than any other medium magnitude in \(G\), the difference label on \(v_9v_0\) can only be conflicting with a big-dl on a \(T_i\) with \(i\geq 5\). But this does not occur since these big difference labels have the opposite parity of \(4\theta+3t-3\).
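The arithmetic behind the first bullet (cases A and B) can be checked numerically. This script is ours; it uses the case B value \(\theta=3t+1\) together with the Table 4 values \(f(v_5)=6\theta+3t\) and \(f(v_6)=4\theta+3t\) (case A has a different relation between \(\theta\) and \(t\) but the same shape of computation), and also verifies the parity claim about the big difference labels \(6\theta+3t-4k\pm 1\).

```python
# Check of the two new arc labels created by inserting v_0 (cases A/B),
# and of the parity argument against the big labels of Table 5.
for t in range(1, 10):
    theta = 3 * t + 1                            # case B value of theta
    new_arc1 = abs(1 - (6 * theta + 3 * t + 1))  # |f'(v_0) - f'(v_5)| on v_5 v_0
    new_arc2 = (4 * theta + 3 * t + 1) - 1       # f'(v_6) - f'(v_0) on v_0 v_6
    assert new_arc1 == 6 * theta + 3 * t         # larger than any other magnitude
    assert new_arc2 == 4 * theta + 3 * t
    # Big magnitudes on T_5,... are 6*theta+3t-4k+/-1, whose parity is
    # opposite to that of 4*theta+3t, so v_0 v_6 cannot conflict with them:
    for k in range(2, theta):
        assert (6 * theta + 3 * t - 4 * k + 1 - new_arc2) % 2 == 1
```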

Altogether, the results shown in the eight lemmas of this section can be summarized as follows.

Theorem 1. If \(G\) is the disjoint union of circuits, among which at most one has an odd length, or all circuits of odd length have 3 vertices, then \(G\) has a gdl, unless \(G=\overrightarrow{\bf C_{3}}\) or \(G=\overrightarrow{\bf C_{2}}+\overrightarrow{\bf C_{3}}\).

3. Conclusion

As mentioned in the introduction, it is an open question to determine the values of \(n\) for which \(n\overrightarrow{\bf C_{3}}\) has a graceful labeling, i.e., an injection \(f:V\rightarrow\{0,1,\ldots,q\}\) such that, when each arc \(xy\) is assigned the label \((f(y)-f(x)) \bmod (q+1)\), the resulting arc labels are distinct. Considering graceful difference labelings, we have shown that \(n\overrightarrow{\bf C_{3}}\) has a gdl if and only if \(n\geq 2\). We have also proved additional cases that support the following conjecture.
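The two notions compared above can be made concrete with a small checker. The following sketch is ours (function names and the example labeling are assumptions, not from the paper); it implements both the graceful labeling of a digraph and the gdl, and exhibits one explicit gdl witness for \(2\overrightarrow{\bf C_3}\).

```python
# Our own sketch of the two definitions: a graceful labeling of a digraph uses
# labels in {0..q} and arc labels (f(v)-f(u)) mod (q+1); a graceful difference
# labeling (gdl) uses a bijection onto {1..|V|} and signed differences.

def is_gdl(arcs, f):
    V = {u for arc in arcs for u in arc}
    if sorted(f[v] for v in V) != list(range(1, len(V) + 1)):
        return False                     # f must be a bijection onto {1..|V|}
    labels = [f[v] - f[u] for (u, v) in arcs]
    return len(labels) == len(set(labels))

def is_graceful(arcs, f):
    q = len(arcs)
    V = {u for arc in arcs for u in arc}
    vals = [f[v] for v in V]
    if len(set(vals)) != len(vals) or not all(0 <= x <= q for x in vals):
        return False                     # f must be an injection into {0..q}
    labels = [(f[v] - f[u]) % (q + 1) for (u, v) in arcs]
    return len(labels) == len(set(labels))

# 2*C3 has a gdl (the smallest positive case, n >= 2); one explicit witness:
arcs = [(1, 2), (2, 3), (3, 1), (4, 5), (5, 6), (6, 4)]
f = {1: 1, 2: 2, 3: 4, 4: 3, 5: 6, 6: 5}     # arc labels 1, 2, -3, 3, -1, -2
```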

Conjecture 1. If \(G\) is the disjoint union of circuits, then \(G\) has a gdl, unless \(G=\overrightarrow{\bf C_{3}}\) or \(G=\overrightarrow{\bf C_{2}}+\overrightarrow{\bf C_{3}}\).

Author Contributions

All authors contributed equally to the writing of this paper. All authors read and approved the final manuscript.

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Gallian, J. A. (2009). A dynamic survey of graph labeling. The Electronic Journal of Combinatorics, 16(6), 1-219.
  2. Rosa, A. (1966). On certain valuations of the vertices of a graph. In Theory of Graphs (Internat. Symposium, Rome) (pp. 349-355).
  3. Golomb, S. W. (1972). How to number a graph. In Graph Theory and Computing (pp. 23-37). Academic Press.
  4. Feng, W., & Xu, C. (2011). A survey of the gracefulness of digraphs. International Journal of Pure and Applied Mathematics, 69(3), 245.
  5. West, D. B. (1996). Introduction to Graph Theory (Vol. 2). Upper Saddle River, NJ: Prentice Hall.
A comparative analysis of the travelling salesman problem: Exact and machine learning techniques https://old.pisrt.org/psr-press/journals/odam-vol-2-issue-3-2019/a-comparative-analysis-of-the-travelling-salesman-problem-exact-and-machine-learning-techniques/ Sun, 03 Nov 2019 11:01:04 +0000 https://old.pisrt.org/?p=3395
ODAM-Vol. 2 (2019), Issue 3, pp. 23 – 37 Open Access Full-Text PDF
Jeremiah Ishaya, Abdullahi Ibrahim, Nassirou Lo
Abstract: Given a set of locations or cities and the cost of travel between each pair of locations, the task is to find the optimal tour that visits each location exactly once and returns to the starting location. We solved a routing problem with a focus on the Traveling Salesman Problem using two algorithms. Choosing the algorithm that gives the optimal result is difficult to accomplish in practice, and most of the traditional methods are computationally heavy, whereas machine learning algorithms can give near-optimal solutions. This paper studied two methods: branch-and-cut and machine learning. In the machine learning method, we used neural networks and reinforcement learning with 2-opt to train a recurrent network that predicts a distribution over location permutations, using the negative tour length as the reward signal and policy gradient to optimize the parameters of the recurrent network. The improved machine learning method with 2-opt gives near-optimal results on 2D Euclidean instances with up to 200 nodes.

Open Journal of Discrete Applied Mathematics

A comparative analysis of the travelling salesman problem: Exact and machine learning techniques

Jeremiah Ishaya, Abdullahi Ibrahim\(^1\), Nassirou Lo
Department of Mathematical Science, African Institute for Mathematical Sciences, Mbour, Senegal.; (J.I & N.L)
Department of Mathematical Science, Baze University Abuja, Nigeria.; (A.I)
\(^{1}\)Corresponding Author: abdullahi.ibrahim@bazeuniversity.edu.ng; Tel.: +2348067497949

Abstract

Given a set of locations or cities and the cost of travel between each pair of locations, the task is to find the optimal tour that visits each location exactly once and returns to the starting location. We solved a routing problem with a focus on the Traveling Salesman Problem using two algorithms. Choosing the algorithm that gives the optimal result is difficult to accomplish in practice, and most of the traditional methods are computationally heavy, whereas machine learning algorithms can give near-optimal solutions. This paper studied two methods: branch-and-cut and machine learning. In the machine learning method, we used neural networks and reinforcement learning with 2-opt to train a recurrent network that predicts a distribution over location permutations, using the negative tour length as the reward signal and policy gradient to optimize the parameters of the recurrent network. The improved machine learning method with 2-opt gives near-optimal results on 2D Euclidean instances with up to 200 nodes.

Keywords:

Combinatorial optimization, traveling salesman problem, branch-and-cut, machine learning, vehicle routing problem, routing problem.

1. Introduction

The Traveling Salesman Problem (TSP) is one of the variants of the Vehicle Routing Problem (VRP), a classical and widely studied problem in combinatorial optimization [1]. The TSP has been studied in Operations Research (OR), engineering, and computer science since the 1950s, and several techniques have been developed to solve this kind of problem [2]. The TSP describes a salesman who must travel through \(n\) cities. The order of visiting the cities is not important, as long as the salesman visits every city exactly once and returns to the starting city [3]. The cities are connected to one another through weighted links.

In graph theory, the TSP can be understood as searching for the shortest possible Hamiltonian cycle in a graph, in which the nodes represent city locations and the edges or arcs represent direct routes between the nodes [3]. Finding an optimal solution for the TSP is NP-hard, even in the 2D Euclidean case [4]. The goal is to find the cycle with the least accumulated weight.
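To make the shortest-Hamiltonian-cycle objective concrete, here is a small brute-force sketch (our own example, not from the paper): fixing a start city and enumerating all \((n-1)!\) orderings of the remaining cities is exactly what makes exact methods such as branch-and-cut necessary beyond small \(n\).

```python
# Brute-force TSP on an explicit (asymmetric) cost matrix: fix city 0 as the
# start and try every permutation of the remaining cities.
from itertools import permutations

def tour_length(tour, d):
    return sum(d[tour[i]][tour[(i + 1) % len(tour)]] for i in range(len(tour)))

def brute_force_tsp(d):
    n = len(d)
    best = min(permutations(range(1, n)),
               key=lambda p: tour_length((0,) + p, d))
    return (0,) + best, tour_length((0,) + best, d)

d = [[0, 2, 9, 10],
     [1, 0, 6, 4],
     [15, 7, 0, 8],
     [6, 3, 12, 0]]
tour, cost = brute_force_tsp(d)          # optimal tour 0 -> 2 -> 3 -> 1 -> 0
```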

In real life scenario, one can expect quick and near optimal solutions rather than the optimal one. As a result, depending on the size of the problem, heuristics or machine learning methods may be used to find optimum or near optimum solutions. Geldenhuys in [5], implemented the branch-and-cut algorithm to solve a large scale TSP where the problem has to be solved several times using \textit{branch-and-cut} algorithm before a solution to the TSP was obtained. Using large-size instances of the TSP, a substantial portion of the computation time of the entire branch and cut algorithm is spent in linear program optimizer. In their work, they constructed a full implementation of branch and cut algorithm, utilizing the special structure, however, did not implement all of the refinements [6], and used some classes of TSP constraints such as Sub-tour elimination, 2-Matching, Comb and Clique-tree inequalities. Their result were compared with that of [7] and realized that the previous outperform theirs, also, realized how important it is to have more classes of constraints which are essential for solving large instances of the TSP. Only implemented a subtour elimination constraints which has less accuracy in terms of solution as compared to the previous studies and adding more classes such as the ones mentioned above will improved the performance of the [7].

TSP can be classified into the following categories:
  • (i) Symmetric Traveling Salesman Problem (s-TSP): Let \(V=\{v_1,\ldots,v_n\}\) be a set of cities, \(A=\{(p,q): p,q\in V\}\) be the set of edges, and \(d_{pq}=d_{qp}\) be a cost measure associated with the edge \((p,q)\in A\), which is symmetric. The s-TSP is the problem of finding a minimal-length closed tour that visits each city once. If the cities \(v_i\in V\) are given by their coordinates \((x_i,y_i)\) and \(d_{rs}\) is the Euclidean distance between \(r\) and \(s\), then we have a Euclidean TSP.
  • (ii) Asymmetric Traveling Salesman Problem (a-TSP): If, in the above definition, the cost measure satisfies \(d_{pq} \neq d_{qp}\) for at least one pair \(( p , q )\), then the TSP becomes an a-TSP.
  • (iii) Multiple Traveling Salesman Problem (m-TSP): Given a set of nodes, let there be \(m\) salesmen located at a single depot node; the remaining nodes (cities) are intermediate nodes to be visited. The m-TSP consists of finding tours for all \(m\) salesmen, who all start and end at the same depot, such that each intermediate node is visited exactly once and the total cost of visiting all nodes is minimized.

Nazari et al. [8] presented an end-to-end reinforcement learning framework that can be used to solve the VRP. When applied to the TSP, their approach outperformed Google's OR-Tools and classical heuristics on medium-sized problems in terms of solution quality and computational time. Likewise, [9] considered a similar technique, solving a CVRP with both exact and machine learning methods, and [10] applied a branch-and-cut algorithm to the TSP and solved instances with up to \(200\) nodes.

Applying neural networks to combinatorial optimization problems has a long history, with the majority of research focused on the TSP [11]. One of the earliest proposals is the use of Hopfield networks [12] for the TSP: the authors modify the network's energy function to make it equivalent to the TSP objective and use Lagrange multipliers to penalize violations of the problem's constraints. A limitation of this approach is its sensitivity to hyperparameters and parameter initialization, as analyzed by [13]. Abdoun and Abouchabaka [14] investigated several machine learning heuristics for solving the TSP, namely Nearest Neighbor, Genetic Algorithm, Ant Colony Optimization, and Q-learning, using several well-known TSPLIB instances for comparison. The Q-learning capability was improved through modifications supported by a 2-opt local search. Their results are encouraging for instances with fewer than 200 cities.

Applications of the Vehicle Routing Problem cut across several areas, including courier service [15], real-time delivery of customer demand [16], real-time milk-run dispatching [17], and milk collection [18]. This study is aimed at developing a machine learning algorithm for solving the TSP and comparing its solutions with those of an exact method in order to determine the optimality gap. To achieve this, we set the following objectives:

  • (i) Develop a mathematical formulation for TSP,
  • (ii) Develop a machine learning algorithm for solving TSP,
  • (iii) Apply this method on solving large size problem.

Our work is inspired by advancements in sequence-to-sequence learning [19, 20, 21, 22]. Our contribution differs from the neural combinatorial optimization approach for the TSP proposed in [22] in that we improve that framework by adding a local search (2-opt) step to the algorithm.

Local search is one of the oldest and most intuitive optimization techniques: it starts from a solution and repeatedly improves it by performing local perturbations called moves. In optimization, 2-opt is one of the simplest local search algorithms for the traveling salesman problem. The main idea is to take a route that crosses over itself and reorder it so that it does not, i.e., to gradually improve an initially given feasible solution until it reaches a local optimum and no further improvement can be made. The improvements are carried out using so-called ``inversions''. The method applies to both symmetric and asymmetric instances; in our case we use it for the 2D symmetric TSP. The technique is not guaranteed to find the global optimum; instead, it returns an answer that is said to be 2-optimal, which makes it a heuristic.

The remainder of the paper is organized as follows. Section 2 introduces the mathematical model of the TSP and the proposed algorithms. The various techniques are then applied to the TSP in Section 3. A summary of the findings and proposed future research directions are given in Section 4.

2. Mathematical formulation and methods

This section presents the mathematical formulation of the TSP, the branch-and-cut method, and the machine learning algorithm.

Nomenclatures

  • \(G(V,E)\) represents a complete graph \(G\).
  • \(V = \{v_1, v_2, \dots, v_n\}\) denotes the set of cities.
  • \(E = \{(v_{i}, v_{j}) : v_i, v_j \in V, i \neq j \} \) is the set of edges between the cities.
  • \(d_{ij}\) is the nonnegative cost (distance) of traveling between cities \(i\) and \(j\).
  • The decision variable is defined as \( y_{ij} = \begin{cases} 1 \hspace{0.3cm} \text{if edge} \hspace{0.3cm} (i,j) \in \text{E is in the tour} \\ 0 \hspace{0.3cm} \text{otherwise.} \end{cases} \)
  • \(y^{*}\) is the optimal solution.
  • \(\gamma\) is a tour.
  • \(\vert \vert \cdot \vert \vert_{2}\) is the Euclidean norm.
  • \(x\) represents a sequence of cities.
  • \(\mathcal{P}(\gamma \vert x)\) is the stochastic policy.
  • \(\{dec_{i}\}\) is the sequence of decoder latent memory states.
  • \(\theta\) is the parameter vector of the pointer network.
  • \(L\) is the expected tour length.
  • \(\delta(v)\) is the set of edges incident to vertex \(v\).
  • \(y(\delta(v))\) is the sum of the variables \(y_e\) over \(e \in \delta(v)\).
  • \(G(ref, q)\) is a glimpse function.
  • \(T\) is the temperature hyperparameter.
  • \(C\) is a hyperparameter that controls the range of the logits.

2.1. Mathematical formulation of TSP using ILP

There are several mathematical formulations of the TSP [19], employing a variety of constraints that enforce the requirements of the problem. The formulation used in our comparative analysis is as follows.

Given a complete graph \(G = (V,E)\) with \(\vert V \vert = n\), \( \vert E \vert = m = \frac{n(n-1)}{2}\), and a nonnegative cost \( d_{ij}\) for each edge, we seek a minimum-cost tour containing each vertex exactly once. Introducing a binary variable \( y_{ij}\) for the possible inclusion of any edge \( (i,j) \in E \) in the tour, we get the following classical ILP formulation.

Recall from Section 1 that since we can travel from any city to any other, the graph is complete; that is, there is an edge between every pair of nodes. For each edge in the graph, we associate a binary variable $$ y_{ij} = \begin{cases} 1 \hspace{0.3cm} \text{if edge} \hspace{0.3cm} (i,j) \in \text{E is in the tour} \\ 0 \hspace{0.3cm} \text{otherwise.} \end{cases} $$ Since the edges are undirected, it suffices to include only edges with \( i < j \) in the model. Furthermore, since we are minimizing the total distance traveled during the tour, we calculate \( d_{ij}\) between each pair of nodes \(i\) and \(j\); the total distance traveled is then the sum of the distances of the edges included in the tour:
\begin{equation} \text{total cost} = \sum_{(i,j) \in E } d_{ij} y_{ij}. \label{opt} \end{equation}
(1)
Since the tour passes through each city exactly once, each node in the graph must have exactly one incoming and one outgoing edge, i.e., for every node \(i\), exactly two of the binary variables \(y_{ij}\) must be equal to 1. We write this as
\begin{equation} \sum_{(j \in V) } y_{ij} = 2 \, \, \forall i \in V. \label{opt2} \end{equation}
(2)
Furthermore, to eliminate subtours that might arise from the above constraints, we add the following constraints:
\begin{equation} \sum_{{ i,j \in S , i \neq j}} y_{ij} \le \vert S \vert - 1, \forall \, \, S \subset V, S \neq \emptyset \label{sec} \end{equation}
(3)
This constraint requires that, for each proper non-empty subset \(S\) of the set of cities \(V\), the number of edges between the nodes of \(S\) be at most \(\vert S \vert - 1.\) Therefore, the final integer linear program of our TSP formulation is as follows: \begin{equation*} \text{Min} \sum_{(i,j) \in E } d_{ij} y_{ij}, \end{equation*} subject to: \begin{equation*} \sum_{j \in V } y_{ij} = 2 \, \, \forall i \in V, \end{equation*}
\begin{equation}\label{all} \sum_{i,j \in S , i \neq j} y_{ij} \le \vert S \vert - 1, \forall \, \, S \subset V, S \neq \emptyset, y_{ij} \in \{0,1 \}. \end{equation}
(4)
If the set of cities \(V\) is of size \(n\), then there are \( 2^{n} - 2 \) subsets \(S\) of \(V\), excluding \( S=V \) and \( S = \emptyset\). Equation (1) defines the objective function, Equation (2) gives the degree equation for each vertex, Equation (3) gives the subtour elimination constraints (SEC), which forbid solutions consisting of several disconnected tours, and Equation (4) defines the integrality constraints. Note that some of the SEC are redundant: for the vertex sets \( S \subset V, S \neq \emptyset\), and \( S^{'} = V \setminus S \), we get pairs of SEC both enforcing the connection of \( S\) and \( S^{'} \).
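On a tiny instance the formulation can be sanity-checked by brute force: every Hamiltonian cycle satisfies the degree and subtour constraints, so minimizing objective (1) over all tours yields the ILP optimum. A sketch with hypothetical coordinates:

```python
import itertools
import math

# Hypothetical 5-city instance; d[i][j] is the symmetric cost matrix d_ij.
pts = [(0, 0), (1, 5), (5, 2), (6, 6), (8, 0)]
n = len(pts)
d = [[math.dist(pts[i], pts[j]) for j in range(n)] for i in range(n)]

def cost(order):
    """Objective (1) for the tour 0 -> order[0] -> ... -> order[-1] -> 0."""
    tour = (0,) + order
    return sum(d[tour[k]][tour[(k + 1) % n]] for k in range(n))

# Fixing city 0 as the start, each tour is a permutation of the other cities.
best = min(itertools.permutations(range(1, n)), key=cost)

# Number of subtour-elimination subsets S (excluding S = V and the empty set):
print(2 ** n - 2)          # 30 for n = 5
print(best, cost(best))
```

This exhaustive search is only feasible for very small \(n\), which is precisely why the branch-and-cut and learning methods below are needed.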

2.1.1. LP relaxation for TSP

The LP Relaxation for TSP can be described as follows:
Given a complete graph \( G = (V,E) \) with edge costs \(c = ( d_{ij} : (i,j) \in E )\), the relaxation has variables \( y = (y_{ij} : (i,j) \in E).\) Our model introduced, for each vertex \(v\), an equation requiring the variables corresponding to edges having \(v\) as an endpoint to sum to \(2\). These degree equations are as follows:
\begin{equation} y(\delta(v)) = 2 \hspace{0.3cm} \text{for all } v \in V \end{equation}
(5)
Since \( \delta(v) \) is the set of edges incident to vertex \(v\), and \( y(\delta(v))\) sums the variables in this set, the problem becomes \begin{eqnarray*} \text{Min} \hspace{0.2cm} c^{T}y, \end{eqnarray*} subject to:
\begin{eqnarray} y(\delta(v)) = 2 \hspace{0.3cm} \text{for all vertices } v , 0 \leq y_{e} \leq 1 \hspace{0.3cm} \text{for all edges e} \end{eqnarray}
(6)
which is called the degree LP relaxation, or sometimes the assignment LP, where \(c^{T} \) is the transpose of the cost vector \(c\). Given a non-empty proper subset \(S\) of \(V\), the subtour inequality for \(S\) requires that the variables corresponding to edges joining vertices in \(S\) to vertices in \( V \setminus S\) sum to at least 2. The inequality can be written as
\begin{equation} y(\delta(S)) \geq 2. \end{equation}
(7)
Therefore, the subtour relaxation of the TSP is \begin{equation*} \text{Min} \hspace{0.2cm} c^{T}y, \end{equation*} subject to: \begin{equation*} y(\delta(v)) = 2 \hspace{0.3cm} \text{for all vertices } v, \end{equation*}
\begin{equation} y(\delta(S)) \geq 2 \text{ for all} \, \, S \subset V, S \neq V, \vert S \vert \geq 3, 0\leq y_{e} \leq 1 \hspace{0.3cm} \text{for all edges e}.\\ \end{equation}
(8)
Note that this LP problem has an exponential number of constraints and cannot be solved with an explicit formulation, but it can be handled as part of the cutting plane method for the TSP. The general form of the TSP relaxations is
\begin{equation} \begin{array}{ll} \text{Min} & c^{T}y \\ \text{Subject to} & y(\delta(v)) = 2 \hspace{0.3cm} \text{for all vertices } v \\ & Cy \leq d, 0\leq y_{e} \leq 1 \, \, \text{for all edges e}, \end{array} \end{equation}
(9)
where \( Cy \leq d\) is the system of \(m\) inequalities satisfied by all tours.
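The subtour inequalities \(y(\delta(S)) \geq 2\) can be checked on a candidate solution by computing the cut value directly. A small sketch with a hypothetical integer solution made of two disconnected triangles, which violates the inequality:

```python
def cut_value(y, S):
    """y(delta(S)): total y-value on edges with exactly one endpoint in S."""
    return sum(v for (i, j), v in y.items() if (i in S) != (j in S))

# Hypothetical candidate solution on 6 nodes: two disjoint triangles,
# each chosen edge variable set to 1 (all other edges are 0 and omitted).
y = {(0, 1): 1, (1, 2): 1, (0, 2): 1, (3, 4): 1, (4, 5): 1, (3, 5): 1}

print(cut_value(y, {0, 1, 2}))  # 0 < 2: the subtour inequality is violated
print(cut_value(y, {0}))        # 2: the degree equation holds at vertex 0
```

In a cutting plane method, finding such a violated subset \(S\) is the separation problem; the corresponding inequality is then added as a cut.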

2.2. Branch-and-cut method

The branch-and-cut algorithm is a combination of the cutting plane method and the widely known branch-and-bound algorithm. The cutting plane method for the TSP was introduced by [24]. The method solves the linear programming relaxation of a problem iteratively, adding cuts after each iteration. With each cut, the feasible region of the linear programming relaxation shrinks without deleting any feasible solution tour. An inequality that cuts off part of the solution space of the linear programming relaxation, but does not cut off any feasible solution of the original integer programming problem, is called a valid inequality. The procedure of solving the linear programming relaxation and searching for valid cuts is repeated until a feasible solution of the original problem is obtained. In the branch-and-cut algorithm, the first step is to initialize a linear programming relaxation of the original problem. A cutting plane procedure is then applied until no more valid inequalities can be found. The best solutions for the original problem and for the relaxed problem are stored. Then the first branching step is taken on a fractional variable, meaning that this fractional variable is restricted to be either 0 or 1. This yields two new nodes in the so-called branch-and-cut tree. In every node, the new linear programming relaxation is solved. If the value of the relaxed problem is higher than the best solution found for the original problem, there is no room for improvement in this branch of the tree and the node is pruned, i.e., cut off. Otherwise, the procedure of branching, solving, and looking for valid inequalities is repeated. Whenever a better incumbent solution is found, the bounds throughout the tree are updated. The branch-and-cut procedure is summarized step by step in Algorithm 1 below.

At some point in the computation, the truncated cutting plane method may no longer deliver cutting planes of satisfactory quality, i.e., it may not provide any cut at all, or the increase in the LP lower bound may be insignificant compared with the remaining gap to the value of the best known solution. The branch-and-cut scheme is therefore used to turn the truncated cutting plane method into a full solution procedure for the TSP.

Given a finite subset \(S\) of points in some \({\mathbb{R}}^{n}\), our problem is to solve
\begin{equation} \begin{array}{ll} \text{Min} & c^{T}y \\ \text{Subject to} & y \in S \end{array} \end{equation}
(10)
for some cost vector \(c \in {\mathbb{R}}^{n}\). It is trivial to create an initial linear system \(Ay \leq b\) that is satisfied by all points in \(S\), and we assume that \(P = \{y:Ay \leq b \}\) is bounded; since \(S\) is finite, it is a simple matter to choose a system that meets these requirements. The bounding process solves the LP relaxations, starting with
\begin{equation} \begin{array}{ll} \text{Minimize} & c^{T}y \\ \text{Subject to} & Ay \leq b \end{array} \end{equation}
(11)
and in general
\begin{equation} \begin{array}{ll} \text{Minimize} & c^{T}y \\ \text{Subject to} & Cy \leq d \hspace{0.4cm} \text{for some linear system} \hspace{0.4cm} Cy \leq d. \end{array} \end{equation}
(12)
In the main step of this algorithm, there are three cases to consider, depending on the LP solution \(y^{*}\).

Case 1.

Suppose the LP bound \(c^{T}y^{*}\) is less than the value \( u = c^{T}\bar{y}\) of the best point \( \bar{y} \in S\) found thus far in the search, and suppose the LP solution \(y^{*}\) is not a point in \(S\). Then the LP relaxation did not provide an optimal solution to the subproblem
\begin{equation} \begin{array}{ll} \text{Minimize} & c^{T}y \\ \text{Subject to} & Cy \leq d \\ \text{and} & y \in S. \end{array} \end{equation}
(13)
We carry out a branching step to continue the search process. To do this, a vector \( \alpha \in {\mathbb{R}}^{n}\) and scalars \( \beta^{'} \) and \( \beta^{''} \) are selected such that each member of \(S\) satisfies either \( \alpha^{T}y \leq \beta^{'}\) or \( \alpha^{T}y \geq \beta^{''}\). Two new subproblems are created by imposing the extra constraint \(\alpha^{T}y \leq \beta^{'}\) in one subproblem and \(\alpha^{T}y \geq \beta^{''}\) in the other.

Case 2.

Suppose the LP bound is less than \(u\) and \(y^{*}\) is in fact a point in \(S\). In this case we update \(u\) and \(\bar{y}\) by setting \(u = c^{T}y^{*}\) and \(\bar{y} = y^{*}\).

Case 3.

If the LP relaxation is infeasible, or if its optimal value \(c^{T}y^{*} \ge u\), then the subproblem can be discarded: no better point in \(S\) satisfies the constraints defining the subproblem. In the branch-and-cut scheme, this process is augmented by applying the truncated cutting plane algorithm to each of the subproblems, rather than simply relying on the LP relaxation. The algorithm is as follows.
Note that we impose the condition that the separation routine FINDCUTS return cutting planes that are valid for the entire set \(S\), rather than only for the subproblem solutions \( \{y : Cy \leq d \} \cap S \). This standard practice makes it possible to share the cutting planes found with other subproblems by maintaining them in a cut pool that can be searched as one of the separation routines.
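To make the three cases concrete, the following is a minimal depth-first branch-and-bound sketch for the TSP (without the cutting plane component, so the bound is simply the partial tour length; the instance is hypothetical):

```python
import math

# Hypothetical symmetric instance; branching extends a partial tour city by
# city, pruning a node (Case 3) when its bound reaches the incumbent u, and
# updating the incumbent (Case 2) when a full tour is found.
pts = [(0, 0), (2, 4), (5, 1), (6, 5), (8, 0)]
n = len(pts)
d = [[math.dist(a, b) for b in pts] for a in pts]

best = {"u": float("inf"), "tour": None}

def branch(partial, length):
    if length >= best["u"]:                 # Case 3: bound, prune this node
        return
    if len(partial) == n:                   # complete tour: close the cycle
        total = length + d[partial[-1]][0]
        if total < best["u"]:               # Case 2: better incumbent found
            best["u"] = total
            best["tour"] = partial[:]
        return
    for c in range(1, n):                   # Case 1: branch on the next city
        if c not in partial:
            branch(partial + [c], length + d[partial[-1]][c])

branch([0], 0.0)
print(best["u"], best["tour"])
```

The full branch-and-cut method strengthens the bound at each node by running the truncated cutting plane procedure on the node's LP relaxation instead of using the raw partial length.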

2.3. Machine learning

As the machine learning method, we used a combination of neural networks and reinforcement learning called neural combinatorial optimization, which solves combinatorial optimization problems. Our approach combines the policy gradient method of [23], called RL pretraining, which uses a training set to optimize a recurrent neural network (RNN) that parameterizes a stochastic policy over solutions, with the expected reward as the objective. At test time, the policy is fixed and inference is performed by greedy decoding or sampling, together with a local search that involves no pretraining: starting from a random policy, it iteratively optimizes the RNN parameters on a single test instance, again using the expected reward as the objective. The goal of neural combinatorial optimization is to train an agent to match an input sequence to its corresponding optimal output sequence.

The idea behind reinforcement learning is that an agent learns from the environment by interacting with it and receiving rewards for performing actions.
2.3.1. Neural network architecture for TSP
Following the previous discussion, we focus on the 2D Euclidean TSP. Given an input graph, represented as a set of \(m\) cities in 2D, \( x = \{y_{i}\}_{i=1}^{m}\) with \(y_{i} \in {\mathbb{R}}^{2}\), we wish to find a permutation \(\gamma\) of the points (called a tour) that visits each city exactly once and has the least total length. The length of the tour defined by a permutation \(\gamma\) is
\begin{equation} L(\gamma \vert x) = \vert\vert y_{\gamma(m)} - y_{\gamma(1)} \vert\vert_{2} + \sum_{j=1}^{m-1} \vert\vert y_{\gamma(j)} - y_{\gamma(j+1)} \vert\vert_{2}, \end{equation}
(14)
where \(\vert\vert \cdot \vert \vert_{2}\) denotes the \(\ell_2\) norm. We aim to learn, from the input points, the parameters of a stochastic policy \(\mathcal{P}(\gamma \vert x)\) that assigns higher probabilities to short tours and lower probabilities to long ones. We then use the chain rule, as in sequence-to-sequence problems, to factorize the probability of a tour in our neural network architecture as
\begin{equation} \label{lhs} \mathcal{P}(\gamma \vert x) = \prod_{j=1}^{m} \mathcal{P}(\gamma(j) \vert \gamma (< j), x), \end{equation}
(15)
and use an individual softmax module for each term in (15). We adopt the approach of [20], called the pointer network, which allows the model to point to a specific position in the input sequence rather than predicting an index value from a fixed-size vocabulary. We use the pointer architecture of [20] as our policy model to parameterize \( \mathcal{P}(\gamma \vert x) \).
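The tour length \(L(\gamma \vert x)\) of Equation (14), which serves as the (negative) reward throughout this section, can be computed directly; a minimal sketch with hypothetical 2D points:

```python
import math

def tour_length(perm, points):
    """L(gamma | x) from Equation (14): closing edge plus consecutive edges."""
    m = len(points)
    total = math.dist(points[perm[m - 1]], points[perm[0]])  # closing edge
    for j in range(m - 1):
        total += math.dist(points[perm[j]], points[perm[j + 1]])
    return total

# Hypothetical 2D points in the unit square.
x = [(0.0, 0.0), (0.0, 1.0), (1.0, 1.0), (1.0, 0.0)]
print(tour_length([0, 1, 2, 3], x))  # 4.0: the unit-square perimeter
```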

The pointer network has two recurrent neural network (RNN) modules, an encoder and a decoder, both consisting of long short-term memory (LSTM) cells.

The encoder network reads the input sequence \(x\), one city at a time, and transforms it into a sequence of latent memory states \(\{ enc_{i} \}_{i=1} ^{n} \) with \(enc_{i} \in {\mathbb{R}}^{d}\). The input to the encoder network at time step \(i\) is a \(d\)-dimensional embedding of the 2D point \( x_i \), obtained through a linear transformation of \(x_{i}\) shared across all input steps. The decoder network also maintains latent memory states \(\{dec_{i} \}_{i=1} ^{n}\) with \( dec_{i} \in {\mathbb{R}}^{d}\), and at each step \(i\) it uses a pointer mechanism to produce a distribution over the next city to visit in the tour. Once the next city is selected, it is passed as input to the next decoder step. The input of the first decoder step, as in the pointer network architecture of [20], is a \(d\)-dimensional vector treated as a trainable parameter of our neural network.

The attention function takes as input a query vector \( q = dec_{i} \in {\mathbb{R}}^{d}\) and a set of reference vectors \( ref = \{enc_{1},\ldots,enc_{k}\} \) with \(enc_{i} \in {\mathbb{R}}^{d}\), and it predicts a distribution \( A(ref, q) \) over the set of \(k\) references. This probability distribution represents the degree to which the model points to reference \(r_{i}\) upon seeing query \(q\) [22].
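A minimal sketch of such an attention distribution, using the additive form common to pointer networks; the parameter names \(W_1\), \(W_2\), and \(v\) are illustrative stand-ins for trainable weights, initialized randomly here:

```python
import math
import random

random.seed(0)
dim = 4
# Hypothetical trainable parameters (randomly initialized for illustration).
W1 = [[random.gauss(0, 0.1) for _ in range(dim)] for _ in range(dim)]
W2 = [[random.gauss(0, 0.1) for _ in range(dim)] for _ in range(dim)]
v = [random.gauss(0, 0.1) for _ in range(dim)]

def matvec(M, x):
    return [sum(M[i][j] * x[j] for j in range(len(x))) for i in range(len(M))]

def attention(refs, q):
    """A(ref, q): softmax over scores v . tanh(W1 r + W2 q), one per reference."""
    Wq = matvec(W2, q)
    scores = [sum(vi * math.tanh(a + b) for vi, a, b in zip(v, matvec(W1, r), Wq))
              for r in refs]
    mx = max(scores)
    exps = [math.exp(s - mx) for s in scores]
    Z = sum(exps)
    return [e / Z for e in exps]

refs = [[random.gauss(0, 1) for _ in range(dim)] for _ in range(3)]
q = [random.gauss(0, 1) for _ in range(dim)]
probs = attention(refs, q)
print(probs)  # a probability distribution over the 3 references
```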

2.4. Optimization with policy gradients

A supervised loss function based on the conditional log-likelihood, as in [20], factors into a cross-entropy objective between the network's output probabilities and the targets provided by a TSP solver. For NP-hard problems, however, learning from examples is undesirable because:
  • The performance of the model is tied to the quality of the supervised labels.
  • Getting high quality labelled data is expensive and may be infeasible for new problem statements.
  • One cares about finding a competitive solution more than replicating the results of another algorithm.
By contrast, we believe that reinforcement learning (RL) provides an appropriate paradigm for training neural networks for combinatorial optimization, because these problems have relatively simple reward mechanisms that can even be used at test time. Hence, we propose to use model-free, policy-based reinforcement learning to optimize the parameters \(\theta\) of the pointer network. Our training objective is the expected tour length, which, given an input graph \(x\), is defined as
\begin{equation} \label{three} J(\theta \vert x) = \mathbb{E}_{\gamma \sim \mathcal{P}_{\theta}(\cdot \vert x) } L(\gamma \vert x). \end{equation}
(16)
The graphs are drawn from a distribution \(X\), and the total training objective involves sampling from this distribution of graphs, i.e., \( J(\theta) = \mathbb{E}_{x \sim X} J(\theta \vert x)\). We use policy gradient methods and stochastic gradient descent to optimize the parameters, applying the well-known REINFORCE algorithm of [23] to formulate the gradient of Equation (16):
\begin{equation} \label{g4} \nabla_{\theta} J(\theta \vert x) = \mathbb{E}_{\gamma \sim \mathcal{P}_{\theta}(\cdot \vert x)}\left[ (L(\gamma \vert x) - b(x) ) \nabla_{\theta} \log p_{\theta}(\gamma \vert x) \right], \end{equation}
(17)
where \(b(x)\) denotes a baseline function that does not depend on \( \gamma\) and estimates the expected tour length so as to reduce the variance of the gradients. By drawing \(B\) i.i.d. sample graphs \( x_{1},x_{2},\ldots, x_{B} \sim X \) and sampling a single tour per graph, i.e., \( \gamma_{j} \sim \mathcal{P}_{\theta}(\cdot \vert x_{j}) \), the gradient in Equation (17) is approximated with Monte Carlo sampling as follows:
\begin{equation} \nabla_{\theta} J(\theta) \approx \frac{1}{B} \sum_{j=1}^{B} (L(\gamma_{j} \vert x_{j}) - b(x_{j}) ) \nabla_{\theta} \log p_{\theta}(\gamma_{j} \vert x_{j}). \end{equation}
(18)
As the baseline we use an exponential moving average of the rewards obtained by the network over time, to account for the fact that the policy improves with training. Using a parametric baseline to estimate the expected tour length \( \mathbb{E}_{\gamma \sim \mathcal{P}_{\theta} (\cdot\vert x)} L(\gamma \vert x) \) typically improves learning. We therefore introduce an auxiliary network, called a critic, parameterized by \( \theta_{v}\), whose task is to learn the expected tour length found by the current policy \( \mathcal{P}_{\theta} \) given an input sequence \(x\). The critic is trained with stochastic gradient descent on a mean squared error objective between its predictions \( b_{\theta_{v}}(x)\) and the actual tour lengths sampled by the most recent policy. The additional objective function for optimizing the baseline parameters \(\theta_{v}\) is formulated as
\begin{equation} L(\theta_{v}) = \frac{1}{B} \sum_{i = 1}^{B} \vert\vert b_{\theta_{v}} (x_{i}) - L(\gamma_{i} \vert x_{i} ) \vert\vert_{2}^{2}. \end{equation}
(19)
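A scalar sketch of one REINFORCE update with the exponential-moving-average baseline; the sampled tour lengths and score values are hypothetical stand-ins for the quantities the pointer network would produce:

```python
import random

random.seed(1)
B = 8            # batch size
alpha = 0.9      # EMA decay for the baseline b(x)
baseline = 0.0

def reinforce_step(tour_lengths, grad_logps, baseline):
    """Monte Carlo estimate of Equation (18) with a scalar baseline, followed
    by an exponential-moving-average update of that baseline."""
    grad = sum((L - baseline) * g
               for L, g in zip(tour_lengths, grad_logps)) / len(tour_lengths)
    batch_mean = sum(tour_lengths) / len(tour_lengths)
    baseline = alpha * baseline + (1 - alpha) * batch_mean
    return grad, baseline

# Hypothetical sampled tour lengths and score values for one batch.
Ls = [random.uniform(3, 6) for _ in range(B)]
gs = [random.gauss(0, 1) for _ in range(B)]
grad, baseline = reinforce_step(Ls, gs, baseline)
print(grad, baseline)
```

Subtracting the baseline does not bias the gradient estimate but reduces its variance, which is the role of \(b(x)\) in Equation (17).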
The training algorithm is as follows: we perform updates asynchronously across multiple workers, but each worker also handles a mini-batch of graphs for a better gradient estimate [22].

2.5. Search strategy

Since evaluating a tour length is inexpensive, our TSP agent can easily simulate a search procedure at inference time by considering multiple candidate solutions per graph and selecting the best, resembling the way solvers search over a large set of feasible solutions. We consider two search strategies, described below.

2.6. Sampling

We sample multiple candidate tours from our stochastic policy and select the shortest one. Unlike heuristic solvers, we do not enforce that the sampled tours be different. However, the diversity of the sampled tours can be controlled with a temperature hyperparameter when sampling from our non-parametric softmax.
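The temperature-controlled softmax can be sketched as follows (\(T\) is the temperature hyperparameter from the nomenclature; the logits are hypothetical):

```python
import math

def softmax_T(logits, T=1.0):
    """Softmax with temperature T; larger T flattens the sampling distribution."""
    mx = max(logits)
    exps = [math.exp((l - mx) / T) for l in logits]
    Z = sum(exps)
    return [e / Z for e in exps]

logits = [2.0, 1.0, 0.1]
print(softmax_T(logits, T=1.0))   # peaked distribution
print(softmax_T(logits, T=5.0))   # flatter: closer to uniform, more diverse tours
```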

2.7. Local search (2-opt)

As noted in Section 1, 2-opt is a simple local search algorithm for the traveling salesman problem: starting from a feasible tour, it takes a route that crosses over itself and reorders it so that it does not, gradually improving the tour through ``inversions'' until a local optimum is reached and no further improvement is possible. The method applies to both symmetric and asymmetric instances; here we use it for the 2D symmetric TSP. It is not guaranteed to find the global optimum; instead it returns a tour that is said to be 2-optimal, which makes it a heuristic.
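A compact sketch of the 2-opt move: whenever replacing edges \((a,b)\) and \((c,e)\) by \((a,c)\) and \((b,e)\) shortens the tour, the segment between them is inverted (the instance below is hypothetical):

```python
def two_opt(tour, d):
    """Repeatedly invert the segment between positions i+1 and j whenever the
    exchange shortens the tour; stops at a 2-optimal (locally optimal) tour."""
    n = len(tour)
    improved = True
    while improved:
        improved = False
        for i in range(n - 1):
            for j in range(i + 2, n):
                a, b = tour[i], tour[i + 1]
                c, e = tour[j], tour[(j + 1) % n]
                if a == e:        # skip the move that reverses the whole tour
                    continue
                if d[a][c] + d[b][e] < d[a][b] + d[c][e]:
                    tour[i + 1:j + 1] = reversed(tour[i + 1:j + 1])
                    improved = True
    return tour

# Hypothetical symmetric distance matrix on 4 cities where the starting tour
# [0, 2, 1, 3] crosses itself; 2-opt uncrosses it by one inversion.
d = [[0, 1, 4, 1],
     [1, 0, 1, 4],
     [4, 1, 0, 1],
     [1, 4, 1, 0]]
print(two_opt([0, 2, 1, 3], d))  # [0, 1, 2, 3]
```

Each accepted move strictly decreases the tour length, so the loop terminates at a 2-optimal tour; this is the post-processing step we add to the learned policy.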

2.8. Local search (2-opt) algorithm

  • We find a trial solution \( x \in X \) for which the measure \(M(x)\) is as small as we can make it at a first try.
  • We then apply some inversions (transformations), which transform this trial solution into other elements of \(X\) whose measures are progressively smaller.
  • We check \(C\) for elements that might be included in the final \(x\) at an advantage. If there are any such elements, we try to find a transformation that decreases the measure of the sequence.
Note that no supervision is required during RL training. The method still requires training data, however, so generalization depends on the training data distribution. Since the set of cities is encoded, we randomly shuffle the input sequence before feeding it to our pointer network; this increases the stochasticity of the sampling procedure and leads to large improvements in active search.

3. Results and discussion

This section presents the results obtained by applying the exact and machine learning methods to the 2D symmetric TSP. Several TSPLIB instances were used to test our solutions, and the results were compared with solutions found in the literature; this enables a comparison between the optimal solutions and the obtained results. The following instances were considered: Wi29, DJ38, Berlin52, Pr76, KroA100, pr136, pr144, ch150, qa194, and KroA200 from (TSPLIB 1) and (TSPLIB 2) as part of the exact-method experiments. For the heuristic method, we employed Google's OR-Tools in solving the TSP. The distance matrix is computed using the Euclidean distance: the distance between two points is the length of the straight line connecting them in the 2D plane, i.e., for points \((x_{1}, y_{1})\) and \((x_{2}, y_{2})\),
\begin{equation} d = \sqrt{(x_{2}- x_{1})^{2} + (y_{2} - y_{1})^{2}}. \end{equation}
(20)
For the Gap (Error) column, we use the percentage error formula
\begin{equation} \text{Error} = \left| \frac{\text{Upper Bound} - \text{Lower Bound}}{\text{Lower Bound}} \right| \times 100\%. \end{equation}
(21)
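Equations (20) and (21) translate directly into code; the gap check below uses the pr144 values from Table 2:

```python
import math

def euclidean(p, q):
    """Straight-line distance of Equation (20) between two points in the plane."""
    return math.sqrt((q[0] - p[0]) ** 2 + (q[1] - p[1]) ** 2)

def gap_percent(upper, lower):
    """Percentage error of Equation (21) between an upper and a lower bound."""
    return abs((upper - lower) / lower) * 100

print(euclidean((0, 0), (3, 4)))             # 5.0
print(round(gap_percent(65592, 58537), 2))   # 12.05, the pr144 gap in Table 2
```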

3.1. Exact method with TSP

This section presents the performance of the branch-and-cut algorithm on Euclidean instances from the TSPLIB library. The tests were performed on a computer with an Intel(R) Core(TM) i5-2450M CPU @ 2.60GHz and 8 GB of RAM. The proposed algorithm was coded in Python, version \(3.6\). The results obtained by applying the exact method to the symmetric TSP are summarized in the tables below.
Table 1. Exact method.
Instances Number of Instance Best Known Solution Exact Method Time(sec) Error(%)
Wi29 29 27603 27603 0.10 0.0
DJ38 38 6656 6656 0.12 0.0
Berlin52 52 7542 7542 0.15 0.0
pr76 76 21282 21282 0.66 0.0
KroA100 100 108159 108159 0.62 0.0
pr136 136 96772 96772 0.44 0.0
pr144 144 58537 58537 1.63 0.0
ch150 150 6528 6528 7.44 0.0
qa194 194 9352 9352 1.83 0.0
KroA200 200 29437 29437 1.61 0.0
Table 1 above shows the results obtained when the exact method was applied to the TSP instances; the method reached optimality on all of them. Table 2 shows the comparison between the exact method and Google's OR-Tools; the exact method outperformed OR-Tools. Tables 1 and 2 summarize the results obtained when our exact-method algorithm was applied to the TSPLIB instances. In Table 1, the columns in bold are the optimal values and the gap between our method and the optimal value, respectively. The \(6^{th}\) column has errors equal to zero, which shows that our exact method converged to the optimal solution and has no gap with the global optimum. From the \(5^{th}\) column, we can conclude that as the instance size increases, the computation time of our exact method tends to increase, with the exception of the \(8^{th}\) row. Furthermore, Table 2 shows a small gap between our exact method and the heuristic method, largest for the pr144 instance with a \(12.05\%\) gap. Column five of Table 2 shows the execution time on the TSPLIB instances; the execution time increases with the number of cities, with a slight drop for pr144. Column six of Table 2 shows the optimality gap between the heuristic and the exact method.
Table 2. Exact vs. google's or tools on TSP.
Instances Number of Instance Exact Method google's OR Tools Time(sec) Error(%)
Wi29 29 27603 27734 0.02 0.47
DJ38 38 6656 6645 0.03 0.17
Berlin52 52 7542 7924 0.03 5.06
Pr76 76 21282 21923 0.07 3.01
KroA100 100 108159 110928 0.22 2.56
pr136 136 96772 101641 0.44 5.03
pr144 144 58537 65592 0.50 12.05
ch150 150 6528 6638 0.42 1.69
qa194 194 9352 9966 0.99 6.57
KroA200 200 29437 31014 1.07 5.36
Figure 1 shows the optimal tours for two instances: Figure 1a with \(194\) nodes and Figure 1b with \(200\) nodes.

Figure 1. Optimal routes

3.2. Machine learning method on TSP

We conducted experiments to investigate the behaviour of our machine learning method (a combination of neural networks and reinforcement learning for combinatorial optimization) in solving the TSP. Five benchmark tasks were considered (Euclidean TSP5, 10, 20, 50, and 100); data points were drawn from a uniform random distribution in the unit square \([0,1]\times [0,1] \).

In all experiments, mini-batches of 128 sequences were used, with 128 hidden LSTM units, and the two coordinates of each point were embedded in a 128-dimensional space. We used Adam optimization with a learning rate of \(0.001\) for all datasets, decayed every 5000 steps by a factor of \(0.96\). For the RL part, a training set of mini-batches was generated and the model parameters were updated using the actor-critic algorithm on a test instance. We sampled \(200\) batches of solutions from the pretrained model.

The model was finally allowed to train much longer, since it starts from scratch. For each test graph, we ran a 2-opt local search for \(100\) training steps on TSP\(5, 10, 20, 50 \, \, \text{and} \, 100\). The summary of the whole process is as follows:

Table 3 summarizes the results obtained after applying our machine learning method. The Task column indicates the TSP instances considered in our research; it contains the number of cities to be visited alongside the problem name. The second column gives the best known solution for each TSP instance, taken from [20]. Columns 3 and 4 give the best result obtained by neural combinatorial optimization with reinforcement learning combined with the metaheuristic, and the percentage gap between the best known solution and our method, respectively. We observe that our method performs better than that of [20] for instances of size at most \(10\) (column 3, bold numbers), with percentage gaps of \(1.89\%\) and \(0.35\%\) for TSP5 and TSP10, respectively.

The last column shows that the model performs relatively well even though it was not trained directly on the same instance sizes as in [22]. The solution plot for the Optimal, ML and Error columns of Table 3 is shown in Figure 2. Table 3 also reports the execution time of the machine learning algorithm in seconds: as the instance size increases, the execution time increases. However, the execution time may vary depending on the number of iterations specified in the algorithm and on the specifications of the device used.
Table 3. Average tour length of our method (ML) and the best known results from [20].
Task Best known solution ML result ML time(sec) Error(%) Training(TSP)
TSP5 2.12 2.08 21.78 1.89 5, 10, 50 and 100
TSP10 2.87 2.86 25.15 0.35 10
TSP20 3.83 3.93 42.65 2.61 10 and 100
TSP50 5.68 6.02 155.65 5.99 20
TSP100 7.77 8.60 203.01 10.68 100
From Figure 2 we can see that the gap is negative for instances with fewer than \(12\) cities and becomes zero at one point, which shows that our model performs better on small instances; as the instance size increases, the gap increases as well.
  • The results above show that as the number of cities increases, the number of iterations required increases sharply, and the increase is not linear.
  • The algorithm developed is non-deterministic, so it does not guarantee an optimal solution every time. Although it gives a near-optimal solution in most cases, it may fail to converge to a correct solution.
  • This neural network approach is very fast compared to standard programming techniques used for TSP solutions.
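The 2-opt local search applied to each test graph in Section 3.2 repeatedly reverses a segment of the tour whenever the reversal shortens it. A minimal first-improvement sketch on Euclidean points (an illustration under our own conventions, not the authors' implementation; the point set is made up):

```python
import math

def tour_length(tour, pts):
    """Length of the closed tour visiting pts in the given order."""
    return sum(math.dist(pts[tour[i]], pts[tour[(i + 1) % len(tour)]])
               for i in range(len(tour)))

def two_opt(tour, pts):
    """Reverse segments while any reversal shortens the tour (first-improvement 2-opt)."""
    tour = list(tour)
    improved = True
    while improved:
        improved = False
        for i in range(1, len(tour) - 1):
            for j in range(i + 1, len(tour)):
                cand = tour[:i] + tour[i:j + 1][::-1] + tour[j + 1:]
                if tour_length(cand, pts) < tour_length(tour, pts) - 1e-12:
                    tour, improved = cand, True
    return tour

# Four corners of the unit square, started from a self-crossing tour:
pts = [(0, 0), (1, 1), (1, 0), (0, 1)]
print(two_opt([0, 1, 2, 3], pts))  # an uncrossed tour of length 4
```

On this toy input the crossing tour of length \(2+2\sqrt{2}\) is reduced to the optimal square tour of length 4 by a single segment reversal.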

Figure 2. Solution plots

3.3. Machine learning and exact method

Using the same instances as in Table 1 with our machine learning algorithm gives the following results compared to our exact method. In Table 4, columns 3 and 4 give the minimum tour length found by the exact and machine learning methods, respectively, for each instance in column 1. The Error column gives the gap between the exact and machine learning methods; the gap is very small (between \(0.09\%\) and \(1.56\%\)), which is negligible in real-life applications compared to that of the heuristic.

Figure 3 plots the results of our exact method, the machine learning method and Google's OR-Tools. The gap between the heuristic and our exact method is wide compared to the gap between the exact and machine learning methods; looking further at the exact and machine learning curves, the gap is negligible and they overlap at some points of the plot.

Table 4. Exact method vs. machine learning.
Instances Number of Cities Exact Machine Learning Error(%)
Wi29 29 27603 27698 0.34
DJ38 38 6656 6685 0.44
Berlin52 52 7542 7579 0.50
pr76 76 21282 21328 0.22
KroA100 100 108159 108673 0.48
pr136 136 96772 96856 0.09
pr144 144 58537 58697 0.27
ch150 150 6528 6601 1.12
qa194 194 9352 9498 1.56
KroA200 200 29437 29687 0.85

Figure 3. Solution plots for Exact, Heuristics and ML showing optimal gap

Figure 4 is a sample output of six plots based on \(100\) iterations for predicting a TSP tour over \(50\) cities, using a model trained on 50-city instances.

Figure 4. A plot of 50 cities based on our Machine Learning model.

Figure 5. Optimal tour for \(i10\) and \(i20\) instances

Figure 6. Optimal tour for \(i30\) and \(i50\) instances

Figures 5 and 6 show the plots of predicting the permutation of 50 cities at iterations 10, 20, 30, 50 and 80, including the reward before and after our heuristic search.

4. Conclusion

In this paper, we have investigated solution methods for the TSP, concentrating on branch-and-cut (an exact method) and a machine learning method combined with 2-opt (a heuristic algorithm). Machine learning itself involves several heuristic algorithmic choices used for solving combinatorial optimization problems. We compared our solutions with the optimal solutions of TSPLIB instances found previously by other researchers. The branch-and-cut algorithm gave an optimal solution, and the machine learning method gave a near-optimal solution with an optimality gap of at most \(1.56\%\). This improved algorithm was able to solve TSP instances with up to 200 nodes and can be used when a fast, near-optimal solution is required. In terms of optimal routes, this machine learning method can be used to obtain a near-optimal solution on large-sized problems with little experimental time compared with branch-and-cut and Google's tool. Our future work might consider the multiple traveling salesman problem with time windows \([a_i, b_i]\): each vehicle must visit its assigned location within a particular period, a vehicle may arrive before \(a_i\) and wait until time \(a_i\), and no vehicle is allowed to arrive after \(b_i\).

Author Contributions

All authors contributed equally to the writing of this paper. All authors read and approved the final manuscript.

Conflicts of Interest:

The authors declare no conflict of interest.

References

  1. Christofides, N. (1976). Worst-case analysis of a new heuristic for the travelling salesman problem (No. RR-388). Carnegie-Mellon Univ Pittsburgh Pa Management Sciences Research Group. [Google Scholor]
  2. Chauhan, C., Gupta, R., & Pathak, K. (2012). Survey of methods of solving tsp along with its implementation using dynamic programming approach. International journal of computer applications, 52(4), 12-19.[Google Scholor]
  3. Edwards, C., & Spurgeon, S. (1998). Sliding mode control: theory and applications, Crc Press.[Google Scholor]
  4. Papadimitriou, C. H. (1977). The Euclidean travelling salesman problem is NP-complete. Theoretical Computer Science, 4(3), 237-244.[Google Scholor]
  5. Geldenhuys, C. E. (1998). An implementation of a branch-and-cut algorithm for the travelling salesman problem, Ph.D. thesis, University of Johannesburg.[Google Scholor]
  6. Padberg, M., & Rinaldi, G. (1991). A branch-and-cut algorithm for the resolution of large-scale symmetric traveling salesman problems. SIAM review, 33(1), 60-100.[Google Scholor]
  7. Grotschel, M., & Holland, O. (1991). Solution of large-scale symmetric travelling salesman problems. Mathematical Programming, 51(1-3), 141-202.[Google Scholor]
  8. Nazari, M., Oroojlooy, A., Snyder, L., & Takác, M. (2018). Reinforcement learning for solving the vehicle routing problem. In Advances in Neural Information Processing Systems (pp. 9839-9849).[Google Scholor]
  9. Ralphs, T. K., Kopman, L., Pulleyblank, W. R., & Trotter, L. E. (2003). On the capacitated vehicle routing problem. Mathematical programming, 94(2-3), 343-359.[Google Scholor]
  10. Ibrahim, A.A., Abdulaziz, R.O., Ishaya, J.A., & Samuel, O.S. (2019). Vehicle Routing Problem with Exact Methods. IOSR Journal of Mathematics (IOSR-JM), 15(3), 05-15.[Google Scholor]
  11. Vakhutinsky, A. I., & Golden, B. L. (1995). A hierarchical strategy for solving traveling salesman problems using elastic nets. Journal of Heuristics, 1(1), 67-76.[Google Scholor]
  12. Hopfield, J. J., & Tank, D. W. (1985). “Neural” computation of decisions in optimization problems. Biological cybernetics, 52(3), 141-152.[Google Scholor]
  13. Wilson, G. V., & Pawley, G. S. (1988). On the stability of the travelling salesman problem algorithm of Hopfield and Tank. Biological Cybernetics, 58(1), 63-70.[Google Scholor]
  14. Abdoun, O., & Abouchabaka, J. (2012). A comparative study of adaptive crossover operators for genetic algorithms to resolve the traveling salesman problem. arXiv preprint arXiv:1203.3097. [Google Scholor]
  15. Gendreau, M., Guertin, F., Potvin, J. Y., & Séguin, R. (2006). Neighborhood search heuristics for a dynamic vehicle dispatching problem with pick-ups and deliveries. Transportation Research Part C: Emerging Technologies, 14(3), 157-174. [Google Scholor]
  16. Hvattum, L. M., Løkketangen, A., & Laporte, G. (2006). Solving a dynamic and stochastic vehicle routing problem with a sample scenario hedging heuristic. Transportation Science, 40(4), 421-438. [Google Scholor]
  17. Brotcorne, L., Laporte, G., & Semet, F. (2003). Ambulance location and relocation models. European journal of operational research, 147(3), 451-463.[Google Scholor]
  18. Claassen, G.D.H., & Hendriks, H.B. (2007). An application of special ordered sets to a periodic milk collection problem. European Journal of Operational Research, 180(2), 754-769.[Google Scholor]
  19. Curtin, K. M., Voicu, G., Rice, M. T., & Stefanidis, A. (2014). A comparative analysis of traveling salesman solutions from geographic information systems. Transactions in GIS, 18(2), 286-301.[Google Scholor]
  20. Vinyals, O., Fortunato, M., & Jaitly, N. (2015). Pointer networks. In Advances in Neural Information Processing Systems (pp. 2692-2700).[Google Scholor]
  21. Sutskever, I., Vinyals, O., & Le, Q. V. (2014). Sequence to sequence learning with neural networks. In Advances in neural information processing systems (pp. 3104-3112). [Google Scholor]
  22. Bello, I., Pham, H., Le, Q. V., Norouzi, M., & Bengio, S. (2016). Neural combinatorial optimization with reinforcement learning. arXiv preprint arXiv:1611.09940. [Google Scholor]
  23. Williams, R. J. (1992). Simple statistical gradient-following algorithms for connectionist reinforcement learning. Machine learning, 8(3-4), 229-256. [Google Scholor]
  24. Dantzig, G., Fulkerson, R., & Johnson, S. (1954). Solution of a large-scale traveling-salesman problem. Journal of the operations research society of America, 2(4), 393-410.[Google Scholor]
]]>
Wiener index of uniform hypergraphs induced by trees https://old.pisrt.org/psr-press/journals/odam-vol-2-issue-3-2019/wiener-index-of-uniform-hypergraphs-induced-by-trees/ Sat, 02 Nov 2019 11:48:46 +0000 https://old.pisrt.org/?p=3387
ODAM-Vol. 2 (2019), Issue 3, pp. 19 – 22 Open Access Full-Text PDF
Andrey Alekseevich Dobrynin
Abstract: The Wiener index \(W(G)\) of a graph \(G\) is defined as the sum of distances between its vertices. A tree \(T\) generates \(r\)-uniform hypergraph \(H_{r,k}(T)\) by the following way: hyperedges of cardinality \(r\) correspond to edges of the tree and adjacent hyperedges have \(k\) vertices in common. A relation between quantities \(W(T)\) and \(W(H_{r,k}(T))\) is established.
]]>

Open Journal of Discrete Applied Mathematics

Wiener index of uniform hypergraphs induced by trees

Andrey Alekseevich Dobrynin\(^1\)
Sobolev Institute of Mathematics, Siberian Branch of the Russian Academy of Sciences, Novosibirsk, 630090, Russia.; (A.A.D)
\(^{1}\)Corresponding Author: dobr@math.nsc.ru

Abstract

The Wiener index \(W(G)\) of a graph \(G\) is defined as the sum of distances between its vertices. A tree \(T\) generates \(r\)-uniform hypergraph \(H_{r,k}(T)\) by the following way: hyperedges of cardinality \(r\) correspond to edges of the tree and adjacent hyperedges have \(k\) vertices in common. A relation between quantities \(W(T)\) and \(W(H_{r,k}(T))\) is established.

Keywords:

Tree, hypergraph, Wiener index.

1. Introduction

In this paper we are concerned with undirected connected graphs \(G\) with vertex set \(V(G)\) and edge set \(E(G)\). The degree of a vertex \(v\), denoted \(\deg(v)\), is the number of edges incident to it. If \(u\) and \(v\) are vertices of \(G\), then the number of edges in a shortest path connecting them is said to be their distance and is denoted by \(d_G(u,v)\). The distance of a vertex \(v\) is the sum of distances from \(v\) to all vertices of the graph, \(d_G(v)=\sum_{u\in V(G)} d_G(v,u)\). The Wiener index is a graph invariant defined as the sum of distances between all vertices of \(G\): $$ W(G)= \sum_{u,v \in V(G)} d_G(u,v)=\frac12 \sum_{v \in V(G)} d_G(v). $$
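As a concrete illustration of this definition, a short breadth-first-search sketch computing \(W(G)\); the adjacency-dict representation is our own choice. For the path \(P_4\) it returns 10, matching the classical formula \(W(P_n)=n(n^2-1)/6\).

```python
from collections import deque

def wiener_index(adj):
    """Sum of pairwise distances in a connected graph given as an adjacency dict."""
    total = 0
    for s in adj:
        dist = {s: 0}
        queue = deque([s])
        while queue:                      # BFS from s
            u = queue.popleft()
            for v in adj[u]:
                if v not in dist:
                    dist[v] = dist[u] + 1
                    queue.append(v)
        total += sum(dist.values())
    return total // 2                     # each unordered pair was counted twice

p4 = {0: [1], 1: [0, 2], 2: [1, 3], 3: [2]}  # the path P_4
print(wiener_index(p4))  # → 10
```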

It was introduced as a structural descriptor for tree-like molecular graphs [1]. Details on the mathematical properties and chemical applications of the Wiener index can be found in books [2, 3, 4, 5, 6, 7] and reviews [8, 9, 10, 11, 12, 13]. A number of articles are devoted to comparing the index of a graph with that of derived graphs such as the line graph, the total graph, and thorny and subdivision graphs of various kinds (see, for example, [14, 15, 16, 17]). Hypergraphs generalize graphs by extending the definition of an edge from a binary to an \(r\)-ary relation. The Wiener index of some classes of hypergraphs was studied in [18, 19, 20]. Chemical applications of hypergraphs were discussed in [21, 22].

We define a class of \(r\)-uniform hypergraphs \(H_{r,k}(T)\) induced by \(n\)-vertex trees \(T\). Edges of a tree correspond to hyperedges of cardinality \(r\), and adjacent hyperedges have \(k\) vertices in common, \(1 \leq k \leq \lfloor r/2\rfloor\). Examples of a tree and the corresponding hypergraph are shown in Figure 1. The number of vertices of \(H_{r,k}(T)\) is equal to \((n-2)(r-k)+r\). We are interested in finding a relation between the quantities \(W(T)\) and \(W(H_{r,k}(T))\).

Figure 1. Tree \(T\) and the induced hypergraph \(H_{7,2}(T)\).

2. Main result

Wiener indices of a tree and its induced hypergraph satisfy the following relation.

Theorem 1. For the induced hypergraph \(H_{r,k}(T)\) of a tree \(T\) with \(n\) vertices, $$ W(H_{r,k}(T)) = (r-k)^2\,W(T) + n\binom{k}{2} - (n-1)\binom{r-2k+1}{2}. $$

This result may be useful for ordering of Wiener indices of hypergraphs. If \(r\) and \(k\) are fixed, then the ordering of the Wiener index of induced hypergraphs \(H_{r,k}\) for \(n\)-vertex trees is completely defined by the ordering of the index of trees. In particular, $$ W(H_{r,k}(S_n)) \le W(H_{r,k}(T)) \le W(H_{r,k}(P_n)) $$ for any \(n\)-vertex tree \(T\), where \(S_n\) and \(P_n\) are the star and the path with \(n\) vertices, $$ W(H_{r,k}(S_n)) = \big( r(n-1)[2n(r - 2k) + 8k - 3r - 1] + k(n-2)[k(2n-3)+1] \big) /2, $$ $$ W(H_{r,k}(P_n)) = n(r-k)[(r-k)n^2 + 10k -4r-3]/6 + 2k^2-k(2r+1)+ r(r+1)/2. $$
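These closed forms can be checked numerically against Theorem 1 using the classical values \(W(S_n)=(n-1)^2\) and \(W(P_n)=n(n^2-1)/6\). A sketch of that check (function names are ours; exact rational arithmetic avoids any divisibility concerns):

```python
from fractions import Fraction
from math import comb

def W_H(W_T, n, r, k):
    # Theorem 1: W(H_{r,k}(T)) expressed through W(T)
    return (r - k) ** 2 * W_T + n * comb(k, 2) - (n - 1) * comb(r - 2 * k + 1, 2)

def W_H_star(n, r, k):
    # stated closed form for the star S_n
    return Fraction(r * (n - 1) * (2 * n * (r - 2 * k) + 8 * k - 3 * r - 1)
                    + k * (n - 2) * (k * (2 * n - 3) + 1), 2)

def W_H_path(n, r, k):
    # stated closed form for the path P_n
    return (Fraction(n * (r - k) * ((r - k) * n ** 2 + 10 * k - 4 * r - 3), 6)
            + 2 * k ** 2 - k * (2 * r + 1) + Fraction(r * (r + 1), 2))

# Exhaustive check over small n, r and all admissible k <= floor(r/2)
for n in range(3, 8):
    for r in range(2, 8):
        for k in range(1, r // 2 + 1):
            assert W_H_star(n, r, k) == W_H((n - 1) ** 2, n, r, k)
            assert W_H_path(n, r, k) == W_H(n * (n * n - 1) // 6, n, r, k)
```

For instance, with \(n=5\), \(r=7\), \(k=2\) both routes give \(W(H_{7,2}(S_5))=381\) and \(W(H_{7,2}(P_5))=481\).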

3. Proof of Theorem 1.

The edge subdivision operation for an edge \((x,y)\in E(G)\) is the deletion of \((x,y)\) from graph \(G\) and the addition of two edges \((x,v)\) and \((v,y)\) along with the new vertex \(v\). Vertex \(v\) is called the subdivision vertex. Denote by \(T_e\) the tree obtained from the subdivision of edge \(e\) in a tree \(T\). The distance \(d_G(v,U)\) from a vertex \(v \in V(G)\) to a vertex subset \(U \subseteq V(G)\) is defined as \(d_G(v,U)=\sum_{u\in U} d_G(v,u)\).

Lemma 2. Let \(T_{e_1}, T_{e_2}, \dots , T_{e_{n-1}}\) be trees obtained by subdivision of edges \(e_1, e_2, \dots, e_{n-1}\) of \(n\)-vertex tree \(T\) with subdivision vertices \(v_1, v_2, \dots , v_{n-1}\), respectively. Then $$ d_{T_{e_1}}(v_{1}) + d_{T_{e_2}}(v_{2}) + \dots + d_{T_{e_{n-1}}}(v_{{n-1}}) = 2W(T). $$

Proof. Let \(v\) be the subdivision vertex of edge \(e=(x,y)\) of a tree \(T\). Denote by \(V_x\) and \(V_y\) the sets of vertices of two connected components after deleting edge \(e\) from \(T\) where \(x \in V_x\) and \(y \in V_y\). Since \(d_T(x)=d_T(x,V_x) + |V_y| + d_T(y,V_y)\) and \(d_T(y)=d_T(y,V_y) + |V_x| + d_T(x,V_x)\), \(d_T(x,V_x) + d_T(y,V_y) = (d_T(x) + d_T(y) - n)/2\). Then \begin{eqnarray*} d_{T_e}(v) & = & \sum_{u \in V_x} [\,d_{T_e}(v,x) + d_{T}(x,u)\,] + \sum_{u \in V_y} [\,d_{T_e}(v,y) + d_{T}(y,u)\,] \\ & = & d_T(x,V_x) + d_T(y,V_y) + n = (d_T(x) + d_T(y) + n)/2. \end{eqnarray*} Klein et al. [23] proved that \(\sum_{v \in V(T)} \deg(v) d_T(v) = 4W(T) - n(n-1)\) for an arbitrary \(n\)-vertex tree \(T\). Then \begin{eqnarray*} 2\sum_{i =1}^{n-1} d_{T_{e_i}}(v_i) & = & \sum_{(x,y) \in E(T)} (d_T(x) + d_T(y) + n) \\ &= & \sum_{v \in V(T)} \deg(v) d_T(v)+n(n-1) = 4W(T). \\[-10mm] \end{eqnarray*}
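Lemma 2 can be illustrated numerically on the path \(P_4\) (with \(W(P_4)=10\)): subdividing each of its three edges in turn and summing the distance sums of the subdivision vertices gives \(20 = 2W(P_4)\). A sketch (helper names are ours):

```python
from collections import deque

def distance_sum(adj, s):
    """Sum of BFS distances from s to every vertex of a connected graph."""
    dist = {s: 0}
    queue = deque([s])
    while queue:
        u = queue.popleft()
        for w in adj[u]:
            if w not in dist:
                dist[w] = dist[u] + 1
                queue.append(w)
    return sum(dist.values())

def subdivide(edges, e):
    """Replace edge e=(x, y) by x-v and v-y with a new vertex v; return adjacency dict."""
    x, y = e
    new = [f for f in edges if f != e] + [(x, "v"), ("v", y)]
    adj = {}
    for a, b in new:
        adj.setdefault(a, []).append(b)
        adj.setdefault(b, []).append(a)
    return adj

edges = [(0, 1), (1, 2), (2, 3)]  # the path P_4
total = sum(distance_sum(subdivide(edges, e), "v") for e in edges)
print(total)  # → 20, i.e. 2 * W(P_4)
```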

For convenience, we assume that pendant hyperedges are also adjacent to fictitious hyperedges, shown by dashed lines in Figure 2. Denote by \(B_i\), \(i=1,2,\dots, n\), the vertex sets of the hypergraph \(H=H_{r,k}(T)\) belonging to hyperedge intersections, and let \(A = V(H) \setminus (B_1 \cup B_2 \cup \dots \cup B_n)\). We assume that hyperedge \(E_i\) of the induced hypergraph corresponds to edge \(e_i\) of the source tree \(T\), \(i=1,2,\dots ,n-1\). Let \(d_G(U)=\sum_{u\in U} d_G(u)\) for \(U \subseteq V(G)\). Then the Wiener index of \(H\) can be represented as follows:
\begin{eqnarray} \label{General} W(H) & = & \frac{1}{2} \left( \sum_{i=1}^{n-1} d_H(E_i \cap A) + \sum_{i=1}^{n} d_H(B_i) \right). \end{eqnarray}
(1)

Figure 2. The hyperedges shown by dashed lines

Let \(u\in E_i \cap A\) and let \(v_i\) be the subdivision vertex of edge \(e_i\) in \(T\), \(i=1,2,\dots ,n-1\). Then \begin{eqnarray*} d_H(u) & = & (r-2k-1) + k + k + 2(r-k)+ \dots + 2(r-k) + 3(r-k)+ \dots + 3(r-k) + \dots \\ & = & (r-2k-1) - 2(r-2k) + (r-k) + (r-k) + 2(r-k)+ \dots + 2(r-k) \\ & & \mbox{} + 3(r-k)+ \dots + 3(r-k) + 4(r-k)+ \dots + 4(r-k) + \dots \\ & = & (r-k)d_{T_{e_i}}(v_i) - 2(r-2k) + (r-2k-1). \end{eqnarray*} Summing this equality over all \(r-2k\) vertices of the intersection \(E_i \cap A\), we have \(d_H(E_i \cap A)=(r-2k)d_H(u)=(r-2k)[(r-k)d_{T_{e_i}}(v_i) -(r-2k+1)]\). Applying Lemma 2, we can write
\begin{eqnarray} \label{WA} \nonumber \sum_{i=1}^{n-1} d_H(E_i \cap A) & =& (r-2k)\left( (r-k)\sum_{i=1}^{n-1} d_{T_{e_i}}(v_i) - (n-1)(r-2k+1)\right) \\[1mm] & = & (r-2k)\left[ \, 2(r-k)W(T) - (n-1)(r-2k+1) \, \right] . \end{eqnarray}
(2)
Let \(u\in B_i\) and vertex \(v_i\) of \(T\) corresponds to this hyperedge intersection, \(i=1,2,\dots ,n\). Then \begin{eqnarray*} d_H(u) & = & (k-1) + (r-k)+ \dots + (r-k) + 2(r-k) + \dots + 2(r-k) + 3(r-k)+ \dots + 3(r-k) + \dots \\ & = & (r-k)d_T(v_i) + (k-1). \end{eqnarray*} Summing this equality for all vertices of the hyperedge intersection \(B_i\), we have \(d_H(B_i)=k d_H(u)=k[\, (r-k)d_T(v_i) + (k-1)\, ]\). For vertices of all intersections,
\begin{equation} \label{WB} \sum_{i=1}^{n} d_H(B_i) = k\, [\, (r-k)\sum_{i=1}^{n} d_T(v_i) + n(k-1)\, ] = 2k(r-k)W(T) + nk(k-1). \end{equation}
(3)
Substituting expressions (2) and (3) into Equation (1) completes the proof.

Acknowledgments

This work is supported by the Russian Foundation for Basic Research (project numbers 19-01-00682 and 17-51-560008).

Author Contributions

All authors contributed equally to the writing of this paper. All authors read and approved the final manuscript.

Conflicts of Interest:

The authors declare no conflict of interest.

References

  1. Wiener, H. (1947). Structural determination of paraffin boiling points. Journal of the American Chemical Society, 69(1), 17-20. [Google Scholor]
  2. Balaban, A. T., Motoc, I., Bonchev, D., & Mekenyan, O. (1983). Topological indices for structure-activity correlations. In Steric effects in drug design (pp. 21-55). Springer, Berlin, Heidelberg.[Google Scholor]
  3. Gutman, I., & Furtula, B. (Eds.) (2012). Distance in Molecular Graphs Theory. Mathematical chemistry monographs, 12, Univ. Kragujevac, Kragujevac, Serbia.[Google Scholor]
  4. Gutman, I., & Furtula, B. (Eds.) (2012). Distance in Molecular Graphs Applications. Mathematical chemistry monographs, 13, Univ. Kragujevac, Kragujevac, Serbia.
  5. Gutman, I., & Polansky, O. E. (1986).Mathematical Concepts in Organic Chemistry, Springer--Verlag, Berlin.[Google Scholor]
  6. Todeschini, R., & Consonni, V. (2008). Handbook of molecular descriptors (Vol. 11). John Wiley & Sons.[Google Scholor]
  7. Trinajstić, N. (1992). Chemical Graph Theory. CRC Press, Boca Raton. [Google Scholor]
  8. Dobrynin, A. A., Entringer, R., & Gutman, I. (2001). Wiener index of trees: theory and applications. Acta Applicandae Mathematica, 66(3), 211-249. [Google Scholor]
  9. Dobrynin, A. A., Gutman, I., Klavžar, S., & Žigert, P. (2002). Wiener index of hexagonal systems. Acta Applicandae Mathematica, 72(3), 247-294.[Google Scholor]
  10. Nikolić, S., & Trinajstić, N. (1995). The Wiener index: Development and applications. Croatica Chemica Acta, 68(1), 105-129.[Google Scholor]
  11. Entringer, R. C., Jackson, D. E., & Snyder, D. A. (1976). Distance in graphs. Czechoslovak Mathematical Journal, 26(2), 283-296.[Google Scholor]
  12. Knor, M., Škrekovski, R., & Tepeh, A. (2015). Mathematical aspects of Wiener index. Ars Mathematica Contemporanea, 11(2), 327--352.[Google Scholor]
  13. Entringer, R. C. (1997). Distance in graphs: trees. Journal of combinatorial mathematics and combinatorial computing , 24, 65--84.[Google Scholor]
  14. Eliasi, M., Raeisi, G., & Taeri, B. (2012). Wiener index of some graph operations. Discrete Applied Mathematics, 160(9), 1333-1344. [Google Scholor]
  15. Gutman, I. (1998). Distance of thorny graphs. Publications de l'Institut Mathématique (Beograd), 63(31-36), 73-74. [Google Scholor]
  16. Knor, M., & Škrekovski, R. (2014). Wiener index of line graphs. In Quantitative Graph Theory: Mathematical Foundations and Applications, 279-301. [Google Scholor]
  17. Dobrynin, A. A., & Mel'nikov, L. S. (2012). Wiener index of line graphs. In: Distance in Molecular Graphs Theory, Gutman, I., \& Furtula, B. (Eds.). Univ. Kragujevac, Kragujevac, Serbia, 85--121. [Google Scholor]
  18. Guo, H., Zhou, B., & Lin, H. (2017). The Wiener index of uniform hypergraphs. MATCH Communications in Mathematical and in Computer Chemistry, 78, 133-152.[Google Scholor]
  19. Rani, L. N., Rajkumari, K. J., & Roy, S. (2019). Wiener Index of Hypertree. In Applied Mathematics and Scientific Computing (pp. 497-505). Birkhäuser, Cham.[Google Scholor]
  20. Sun, L., Wu, J., Cai, H., & Luo, Z. (2017). The Wiener index of \(r\)-uniform hypergraphs. Bulletin of the Malaysian Mathematical Sciences Society, 40(3), 1093-1113.[Google Scholor]
  21. Konstantinova, E. V., & Skorobogatov, V. A. (2001). Application of hypergraph theory in chemistry. Discrete Mathematics, 235(1-3), 365-383. [Google Scholor]
  22. Konstantinova, E. (2000). Chemical hypergraph theory. Lecture Notes from Combinatorial & Computational Mathematics Center, http://com2mac.postech.ac.kr. [Google Scholor]
  23. Klein, D. J., Mihalić, Z., Plavšić, D., & Trinajstić, N. (1992). Molecular topological index: A relation with the Wiener index. Journal of chemical information and computer sciences, 32(4), 304-305. [Google Scholor]
]]>
Computing multiplicative topological indices of some chemical nenotubes and networks https://old.pisrt.org/psr-press/journals/odam-vol-2-issue-3-2019/computing-multiplicative-topological-indices-of-some-chemical-nenotubes-and-networks/ Sun, 06 Oct 2019 13:29:36 +0000 https://old.pisrt.org/?p=3265
ODAM-Vol. 2 (2019), Issue 3, pp. 7 – 18 Open Access Full-Text PDF
Zaryab Hussain, Ahsan, Shahid Hussain Arshad
Abstract: The aim of this paper is to calculate the multiplicative topological indices of Zigzag polyhex nanotubes, Armchair polyhex nanotubes, Carbon nanocone networks, two dimensional Silicate network, Chain silicate network, six dimensional Hexagonal network, five dimensional Oxide network and four dimensional Honeycomb network.
]]>

Open Journal of Discrete Applied Mathematics

Computing multiplicative topological indices of some chemical nanotubes and networks

Zaryab Hussain\(^1\), Ahsan, Shahid Hussain Arshad
Department of Mathematics, Punjab College of Commerce New Campus Faisalabad Pakistan.; (Z.H)
Department of Mathematics, Government College University Faisalabad Pakistan.; (Z.H)
Superior Group of Colleges Faisalabad Campus, Faisalabad Pakistan.; (A)
Department of Applied Sciences, National Textile University Faisalabad Pakistan.; (S.H.A)
\(^{1}\)Corresponding Author: zaryabhussain2139@gmail.com; Tel.: +923207488346

Abstract

The aim of this paper is to calculate the multiplicative topological indices of Zigzag polyhex nanotubes, Armchair polyhex nanotubes, Carbon nanocone networks, two dimensional Silicate network, Chain silicate network, six dimensional Hexagonal network, five dimensional Oxide network and four dimensional Honeycomb network.

Keywords:

Chemical graph theory, multiplicative topological index, degree.

1. Introduction

Nowadays graph theory is one of the most iconic and most cited branches of mathematics due to its direct applications in our daily life. It is widely used in computer networking and chemistry. The area of graph theory related to chemistry is known as chemical graph theory. This term was first introduced by Balaban in the book [1] in 1976. Later, in 1991, Bonchev discussed more concepts in the book [2], and the book [3] by Trinajstić offers a wealth of ideas about chemical graph theory and its uses and applications in daily life.

In recent years, a lot of work has been done in chemical graph theory. In [4], Ali et al. calculated the topological indices of some chemical compounds. Pattabiraman and Suganya in [5] and Kanabur in [6] calculated topological indices of some well-known graphs. The concept of multiplicative topological indices of graphs was given in [7, 8, 9, 10]. In [11], Kahasy et al. calculated the atom-bond connectivity temperature index of some important organic compounds. Topological indices of some families of nanostars have been calculated in [12]. He and Jiang in [13] calculated the degree resistance distance of some trees. The degree-based multiplicative atom-bond connectivity index of some nanostructures has been discussed in [14]. In 2018, Hussain and Sabar [15] calculated multiplicative topological indices of the single-walled titania nanotube. In [16], Kulli calculated some topological indices of the two-dimensional silicate network, chain silicate network, six-dimensional hexagonal network, five-dimensional oxide network and four-dimensional honeycomb network. Recently, Kulli [17] computed some topological indices of zigzag polyhex nanotubes, armchair polyhex nanotubes and carbon nanocone networks. The main motivation of this work comes directly from the papers [18, 19, 20].

2. Preliminaries

Let \(G\left(V\left(G\right),E\left(G\right)\right)\) be a finite, simple and connected graph, where \(V\left(G\right)=\left\{v_{1}, v_{2}, \dots, v_{n}\right\}\) is the set of vertices and \(E\left(G\right)=\left\{e_{1}, e_{2}, \dots, e_{m}\right\}\) is the set of edges. Consider the geodesic metric \(d_{G}: V\left(G\right)\times V\left(G\right)\rightarrow \mathbb{R}\), where \(d_{G}\left(u, v\right)\) is the number of edges of a shortest path between \(u\) and \(v\), for any \(u, v \in V(G)\). All vertices at distance exactly 1 from \(u\in V(G)\) are neighbors of \(u\) in \(G\), and the collection of these vertices is called the neighborhood set of \(u\) in \(G\), written \(N_{G}\left(u\right)\). The cardinality of the neighborhood set of \(u\in V(G)\) is called the degree of \(u\) in \(G\), denoted in this paper by \(\xi_{u}\).

First multiplicative Zagreb index is defined as:
\begin{equation}\label{e1} II^{*}_{1}(G)=\prod\limits_{rt\in E(G)} \left(\xi_{r}+\xi_{t}\right). \end{equation}
(1)
Second multiplicative Zagreb index is defined as:
\begin{equation}\label{e2} II_{2}(G)=\prod\limits_{rt\in E(G)} \left(\xi_{r} \ . \ \xi_{t}\right). \end{equation}
(2)
The multiplicative first and second hyper-Zagreb indices are defined as:
\begin{equation}\label{e3} HII_{1}(G)=\prod\limits_{rt\in E(G)} \left(\xi_{r}+\xi_{t}\right)^{2}. \end{equation}
(3)
\begin{equation}\label{e4} HII_{2}(G)=\prod\limits_{rt\in E(G)} \left(\xi_{r} \ . \ \xi_{t}\right)^{2}. \end{equation}
(4)
The first and second multiplicative generalized Zagreb indices are the generalized forms of the first and second multiplicative Zagreb indices as well as of the first and second multiplicative hyper-Zagreb indices. They are defined as:
\begin{equation}\label{e5} MZ^{\alpha}_{1}(G)=\prod\limits_{rt\in E(G)} \left(\xi_{r}+\xi_{t}\right)^{\alpha}. \end{equation}
(5)
\begin{equation}\label{e6} MZ^{\alpha}_{2}(G)=\prod\limits_{rt\in E(G)} \left(\xi_{r} \ . \ \xi_{t}\right)^{\alpha}. \end{equation}
(6)
Multiplicative sum and product connectivity indices are defined as:
\begin{equation}\label{e7} SCII(G)=\prod_{rt\in E(G)}\frac{1}{\sqrt{\xi_{r}+\xi_{t}}}. \end{equation}
(7)
\begin{equation}\label{e8} PCII(G)=\prod_{rt\in E(G)}\frac{1}{\sqrt{\xi_{r} \ . \ \xi_{t}}}. \end{equation}
(8)
The multiplicative atomic bond connectivity index and geometric arithmetic index are defined as:
\begin{equation}\label{e9} ABCII(G)=\prod_{rt\in E(G)}\sqrt{\frac{\xi_{r}+\xi_{t}-2}{\xi_{r} \ . \ \xi_{t}}}. \end{equation}
(9)
\begin{equation}\label{e10} G^{*}AII(G)=\prod_{rt\in E(G)}\left(\frac{2\sqrt{\xi_{r} \ . \ \xi_{t}}}{\xi_{r}+\xi_{t}}\right). \end{equation}
(10)
The general multiplicative geometric arithmetic index is defined as:
\begin{equation}\label{e11} G^{*}A^{\alpha}II(G)=\prod_{rt\in E(G)}\left(\frac{2\sqrt{\xi_{r} \ . \ \xi_{t}}}{\xi_{r}+\xi_{t}}\right)^{\alpha}. \end{equation}
(11)

Fact 1. Let \(\eta_{1}, \eta_{2},\dots, \eta_{n}\) be a sequence. Then \begin{equation*} \prod\limits_{i=1}^{n}\left(\eta_{i}\right)^{\gamma}=\left(\prod\limits_{i=1}^{n} \eta_{i}\right)^{\gamma}, \end{equation*} where \(\gamma\) is a constant.

Proposition 2. Let \(G\left(V\left(G\right), E\left(G\right)\right)\) be any simple, connected and finite graph. Then \begin{equation*} SCII(G)=\left(II^{*}_{1}(G)\right)^{-\frac{1}{2}}. \end{equation*}

Proof. By Fact 1 with \(\gamma=-\frac{1}{2}\), \begin{eqnarray*} SCII(G)&=&\prod_{rt\in E(G)}\frac{1}{\sqrt{\xi_{r}+\xi_{t}}}\\ &=&\prod_{rt\in E(G)}\left(\xi_{r}+\xi_{t}\right)^{-\frac{1}{2}}\\ &=&\left(\prod_{rt\in E(G)}\left(\xi_{r}+\xi_{t}\right)\right)^{-\frac{1}{2}}\\ &=&\left(II^{*}_{1}(G)\right)^{-\frac{1}{2}}. \end{eqnarray*}

Proposition 3. Let \(G\left(V\left(G\right), E\left(G\right)\right)\) be any simple, connected and finite graph. Then \begin{equation*} PCII(G)=\left(II_{2}(G)\right)^{-\frac{1}{2}}. \end{equation*}

Proof. By Fact 1 with \(\gamma=-\frac{1}{2}\), \begin{eqnarray*} PCII(G)&=&\prod_{rt\in E(G)}\frac{1}{\sqrt{\xi_{r} \ . \ \xi_{t}}}\\ &=&\prod_{rt\in E(G)}\left(\xi_{r} \ . \ \xi_{t}\right)^{-\frac{1}{2}}\\ &=&\left(\prod_{rt\in E(G)}\left(\xi_{r} \ . \ \xi_{t}\right)\right)^{-\frac{1}{2}}\\ &=&\left(II_{2}(G)\right)^{-\frac{1}{2}}. \end{eqnarray*}
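Both propositions are easy to confirm numerically; a sketch on the path \(P_3\), whose two edges have degree pairs \((1,2)\) and \((2,1)\) (variable names are ours):

```python
import math
from functools import reduce

# Degree pairs (ξ_r, ξ_t) of the two edges of the path P_3 (degrees 1, 2, 1)
edges = [(1, 2), (2, 1)]

def prod(vals):
    return reduce(lambda a, b: a * b, vals, 1.0)

II1 = prod(r + t for r, t in edges)    # first multiplicative Zagreb index: 3 * 3 = 9
II2 = prod(r * t for r, t in edges)    # second multiplicative Zagreb index: 2 * 2 = 4
SCII = prod(1 / math.sqrt(r + t) for r, t in edges)
PCII = prod(1 / math.sqrt(r * t) for r, t in edges)

assert math.isclose(SCII, II1 ** -0.5)  # Proposition 2
assert math.isclose(PCII, II2 ** -0.5)  # Proposition 3
```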

For detailed concepts of topological indices of graphs we refer the reader to [10, 15].

3. Main results

3.1. Zigzag Polyhex Nanotubes

The zigzag polyhex nanotube is denoted \(TUZC_{6}\left[p,q\right]\), where \(p\) is the number of hexagons in a row and \(q\) is the number of hexagons in a column.

Figure 1. 2\(-\)Dimensional networks of \(TUZC_{6}\left[p,q\right]\)

A 2\(-\)dimensional network of \(TUZC_{6}\left[p,q\right]\) is shown in Figure 1. Let \(G\left(V\left(G\right), E\left(G\right)\right)\) be the graph of a \(\left(p, q\right)\)-dimensional zigzag polyhex nanotube. It is easy to check that \(\left|V\left(G\right)\right|=2p\left(q+1\right)\) and \(\left|E\left(G\right)\right|=p\left(3q+2\right)\). In this structure there are two types of edges on the basis of the degrees of their endpoints, so we can decompose the set of edges as \(E\left(G\right)=E_{1}\left(G\right)\bigcup E_{2}\left(G\right)\), where \begin{eqnarray*} E_{1}\left(G\right)&=&\left\{e=rt\in E\left(G\right) \ \ | \ \ \xi_{r}=2, \xi_{t}=3\right\},\\ E_{2}\left(G\right)&=&\left\{e=rt\in E\left(G\right) \ \ | \ \ \xi_{r}=3, \xi_{t}=3\right\}. \end{eqnarray*} It is easy to check that \(\left|E_{1}\left(G\right)\right|=4p\) and \(\left|E_{2}\left(G\right)\right|=p\left(3q-2\right).\)

Theorem 4. Let \(G\left(V\left(G\right), E\left(G\right)\right)\) be the graph of Zigzag polyhex nanotube \(TUZC_{6}\left[p,q\right]\). Then the first multiplicative Zagreb index for \(G\) is \(2^{p\left(3q-2\right)}\times 3^{p\left(3q-2\right)}\times 5^{4p}\).

Proof. The first multiplicative Zagreb index is: \begin{eqnarray*} II^{*}_{1}(G)&=&\prod\limits_{rt\in E(G)} \left(\xi_{r}+\xi_{t}\right)\\ &=& \prod\limits_{rt\in E_{1}(G)} \left(\xi_{r}+\xi_{t}\right)\times \prod\limits_{rt\in E_{2}(G)} \left(\xi_{r}+\xi_{t}\right)\\ &=&\left(2+3\right)^{\left|E_{1}(G)\right|}\times \left(3+3\right)^{\left|E_{2}(G)\right|}\\ &=&5^{4p}\times 2^{3pq-2p}\times 3^{3pq-2p}\\ &=&2^{p\left(3q-2\right)}\times 3^{p\left(3q-2\right)}\times 5^{4p}. \end{eqnarray*}

Theorem 5. Let \(G\left(V\left(G\right), E\left(G\right)\right)\) be the graph of Zigzag polyhex nanotube \(TUZC_{6}\left[p,q\right]\). Then the second multiplicative Zagreb index for \(G\) is \(2^{4p}\times 3^{6pq}\).

Proof. The second multiplicative Zagreb index is: \begin{eqnarray*} II_{2}(G)&=&\prod\limits_{rt\in E(G)} \left(\xi_{r} \ . \ \xi_{t}\right)\\ &=& \prod\limits_{rt\in E_{1}(G)} \left(\xi_{r} \ . \ \xi_{t}\right)\times \prod\limits_{rt\in E_{2}(G)} \left(\xi_{r} \ . \ \xi_{t}\right)\\ &=&\left(2 \ . \ 3\right)^{\left|E_{1}(G)\right|}\times \left(3 \ . \ 3\right)^{\left|E_{2}(G)\right|}\\ &=&2^{4p}\times 3^{4p}\times 3^{6pq-4p}\\ &=&2^{4p}\times 3^{6pq}. \end{eqnarray*}

Theorem 6. Let \(G\left(V\left(G\right), E\left(G\right)\right)\) be the graph of Zigzag polyhex nanotube \(TUZC_{6}\left[p,q\right]\). Then the multiplicative atomic bond connectivity index for \(G\) is \(2^{p\left(3q-4\right)}\times 3^{p\left(2-3q\right)}\).

Proof. The multiplicative atomic bond connectivity index is: \begin{eqnarray*} ABCII(G)&=&\prod_{rt\in E(G)}\sqrt{\frac{\xi_{r}+\xi_{t}-2}{\xi_{r} \ . \ \xi_{t}}}\\ &=& \prod_{rt\in E_{1}(G)}\sqrt{\frac{\xi_{r}+\xi_{t}-2}{\xi_{r} \ . \ \xi_{t}}}\times \prod_{rt\in E_{2}(G)}\sqrt{\frac{\xi_{r}+\xi_{t}-2}{\xi_{r} \ . \ \xi_{t}}}\\ &=&\left(\frac{2+3-2}{2 \ . \ 3}\right)^{\frac{\left|E_{1}\left(G\right)\right|}{2}}\times \left(\frac{3+3-2}{3 \ . \ 3}\right)^{\frac{\left|E_{2}\left(G\right)\right|}{2}}\\ &=&\left(\frac{1}{2}\right)^{2p}\times \left(\frac{2}{3}\right)^{p\left(3q-2\right)}\\ &=&2^{p\left(3q-4\right)}\times 3^{p\left(2-3q\right)}. \end{eqnarray*}

Theorem 7. Let \(G\left(V\left(G\right), E\left(G\right)\right)\) be the graph of Zigzag polyhex nanotube \(TUZC_{6}\left[p,q\right]\). Then the multiplicative geometric arithmetic index for \(G\) is \(2^{6p}\times 3^{2p}\times 5^{-4p}\).

Proof. The multiplicative geometric arithmetic index is: \begin{eqnarray*} G^{*}AII(G)&=&\prod_{rt\in E(G)}\left(\frac{2\sqrt{\xi_{r} \ . \ \xi_{t}}}{\xi_{r}+\xi_{t}}\right)\\ &=& \prod_{rt\in E_{1}(G)}\left(\frac{2\sqrt{\xi_{r} \ . \ \xi_{t}}}{\xi_{r}+\xi_{t}}\right)\times \prod_{rt\in E_{2}(G)}\left(\frac{2\sqrt{\xi_{r} \ . \ \xi_{t}}}{\xi_{r}+\xi_{t}}\right)\\ &=&\left(\frac{2\sqrt{2 \ . \ 3}}{2+3}\right)^{\left|E_{1}\left(G\right)\right|}\times \left(\frac{2\sqrt{3 \ . \ 3}}{3+3}\right)^{\left|E_{2}\left(G\right)\right|}\\ &=&\left(\frac{2 \ . \ 2^{\frac{1}{2}} \ . \ 3^{\frac{1}{2}}}{5}\right)^{4p}\times \left(\frac{2 \ . \ 3}{3+3}\right)^{p\left(3q-2\right)} \\ &=&2^{6p}\times 3^{2p}\times 5^{-4p}. \end{eqnarray*}
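As a sanity check, Theorems 4–7 can be reproduced by multiplying out the edge partition directly; a short Python sketch (the values \(p=3\), \(q=4\) are an arbitrary choice):

```python
from math import prod, sqrt, isclose

# Edge partition of TUZC6[p, q] derived above: 4p edges with end-degrees (2,3)
# and p(3q-2) edges with end-degrees (3,3).
p, q = 3, 4
partition = {(2, 3): 4*p, (3, 3): p*(3*q - 2)}

II1   = prod((a + b)**k for (a, b), k in partition.items())                 # first mult. Zagreb
II2   = prod((a * b)**k for (a, b), k in partition.items())                 # second mult. Zagreb
ABCII = prod(((a + b - 2) / (a*b))**(k/2) for (a, b), k in partition.items())
GAII  = prod((2*sqrt(a*b) / (a + b))**k for (a, b), k in partition.items())

assert II1 == 2**(p*(3*q - 2)) * 3**(p*(3*q - 2)) * 5**(4*p)       # Theorem 4
assert II2 == 2**(4*p) * 3**(6*p*q)                                # Theorem 5
assert isclose(ABCII, 2.0**(p*(3*q - 4)) * 3.0**(p*(2 - 3*q)))     # Theorem 6
assert isclose(GAII, 2.0**(6*p) * 3.0**(2*p) * 5.0**(-4*p))        # Theorem 7
```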

3.2. Armchair Polyhex Nanotubes

Carbon polyhex nanotubes are nanotubes whose cylindrical surface is made up entirely of hexagons. These types of carbon nanotubes have very interesting thermal, electrical and mechanical properties, and they are very stable in nature.

Figure 2. 2\(-\)dimensional network of \(TUAC_{6}\left[p, q\right]\)

A 2\(-\)dimensional network of \(TUAC_{6}\left[p, q\right]\) is shown in Figure 2. Let \(G\left(V\left(G\right), E\left(G\right)\right)\) be the graph of an armchair polyhex nanotube. It is easy to check that \(\left|V\left(G\right)\right|=2p\left(q+1\right)\) and \(\left|E\left(G\right)\right|=p\left(3q+2\right)\). In this structure there are three types of edges on the basis of their degrees, so we can decompose the set of edges as \(E\left(G\right)=E_{1}\left(G\right)\bigcup E_{2}\left(G\right)\bigcup E_{3}\left(G\right)\), where \begin{eqnarray*} E_{1}\left(G\right)&=&\left\{e=rt\in E\left(G\right) \ \ | \ \ \xi_{r}=2, \xi_{t}=2\right\},\\ E_{2}\left(G\right)&=&\left\{e=rt\in E\left(G\right) \ \ | \ \ \xi_{r}=2, \xi_{t}=3\right\},\\ E_{3}\left(G\right)&=&\left\{e=rt\in E\left(G\right) \ \ | \ \ \xi_{r}=3, \xi_{t}=3\right\}. \end{eqnarray*} It is easy to check that \(\left|E_{1}\left(G\right)\right|=p\), \(\left|E_{2}\left(G\right)\right|=2p\) and \(\left|E_{3}\left(G\right)\right|=p\left(3q-1\right)\). From this edge partition, we can easily obtain the following results.

Theorem 8. Let \(G\left(V\left(G\right), E\left(G\right)\right)\) be the graph of Armchair polyhex nanotube \(TUAC_{6}\left[p,q\right]\). Then the first multiplicative Zagreb index for \(G\) is \(2^{p\left(1+3q\right)}\times 3^{p\left(3q-1\right)}\times 5^{2p}\).

Theorem 9. Let \(G\left(V\left(G\right), E\left(G\right)\right)\) be the graph of Armchair polyhex nanotube \(TUAC_{6}\left[p,q\right]\). Then the second multiplicative Zagreb index for \(G\) is \(2^{4p}\times 3^{6pq}\).

Theorem 10. Let \(G\left(V\left(G\right), E\left(G\right)\right)\) be the graph of Armchair polyhex nanotube \(TUAC_{6}\left[p,q\right]\). Then the multiplicative atomic bond connectivity index for \(G\) is \(\sqrt{2^{p\left(6q-5\right)}}\times 3^{p\left(1-3q\right)}\).

Theorem 11. Let \(G\left(V\left(G\right), E\left(G\right)\right)\) be the graph of Armchair polyhex nanotube \(TUAC_{6}\left[p,q\right]\). Then the multiplicative geometric arithmetic index for \(G\) is \(2^{3p}\times 3^{p}\times 5^{-2p}\).
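The same edge-partition computation verifies the closed forms for the armchair tube; a sketch checking Theorems 8 and 10 (the values \(p=2\), \(q=3\) are arbitrary):

```python
from math import prod, sqrt, isclose

# Edge partition of TUAC6[p, q] derived above: p edges of type (2,2),
# 2p of type (2,3), and p(3q-1) of type (3,3).
p, q = 2, 3
partition = {(2, 2): p, (2, 3): 2*p, (3, 3): p*(3*q - 1)}

II1   = prod((a + b)**k for (a, b), k in partition.items())
ABCII = prod(((a + b - 2) / (a*b))**(k/2) for (a, b), k in partition.items())

assert II1 == 2**(p*(1 + 3*q)) * 3**(p*(3*q - 1)) * 5**(2*p)        # Theorem 8
assert isclose(ABCII, sqrt(2**(p*(6*q - 5))) * 3.0**(p*(1 - 3*q)))  # Theorem 10
```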

3.3. Carbon Nanocone Networks

An \(n-\)dimensional one\(-\)pentagonal nanocone is denoted by \(CNC_{5}\left[n\right]\), where \(n\) is the number of hexagon layers encompassing the conical surface of the nanocone, and 5 indicates that there is a pentagon at the tip, called its core.

Figure 3. 6\(-\)dimensional one\(-\)pentagonal nanocone network

A 6\(-\)dimensional one\(-\)pentagonal nanocone network is shown in Figure 3. Now, \(\left|V\left(G\right)\right|=5\left(n+1\right)^{2}\) and \(\left|E\left(G\right)\right|=5\left(\frac{3}{2}n^{2}+\frac{5}{2}n+1\right)\). In this structure there are the following three types of edges: \(E\left(G\right)=E_{1}\left(G\right)\bigcup E_{2}\left(G\right)\bigcup E_{3}\left(G\right)\), where \begin{eqnarray*} E_{1}\left(G\right)&=&\left\{e=rt\in E\left(G\right) \ \ | \ \ \xi_{r}=2, \xi_{t}=2\right\},\\ E_{2}\left(G\right)&=&\left\{e=rt\in E\left(G\right) \ \ | \ \ \xi_{r}=2, \xi_{t}=3\right\},\\ E_{3}\left(G\right)&=&\left\{e=rt\in E\left(G\right) \ \ | \ \ \xi_{r}=3, \xi_{t}=3\right\}. \end{eqnarray*} It is easy to check that \(\left|E_{1}\left(G\right)\right|=5\), \(\left|E_{2}\left(G\right)\right|=10n\) and \(\left|E_{3}\left(G\right)\right|=5\left(\frac{3}{2}n^{2}+\frac{1}{2}n\right)\), and the following results can be obtained immediately.

Theorem 12. Let \(G\left(V\left(G\right), E\left(G\right)\right)\) be the graph of Carbon nanocone networks \(CNC_{5}\left[n\right]\). Then the first multiplicative Zagreb index for \(G\) is \(\sqrt{2^{15n^{2}+5n+20}}\times \sqrt{3^{15n^{2}+5n}}\times 5^{10n}\).

Theorem 13. Let \(G\left(V\left(G\right), E\left(G\right)\right)\) be the graph of Carbon nanocone networks \(CNC_{5}\left[n\right]\). Then the second multiplicative Zagreb index for \(G\) is \(2^{10\left(n+1\right)}\times 3^{15n\left(n+1\right)}\).

Theorem 14. Let \(G\left(V\left(G\right), E\left(G\right)\right)\) be the graph of Carbon nanocone networks \(CNC_{5}\left[n\right]\). Then the multiplicative atomic bond connectivity index for \(G\) is \(\sqrt{2^{5\left(3n^{2}-n-1\right)}}\times \sqrt{3^{-5n\left(3n+1\right)}}\).

Theorem 15. Let \(G\left(V\left(G\right), E\left(G\right)\right)\) be the graph of Carbon nanocone networks \(CNC_{5}\left[n\right]\). Then the multiplicative geometric arithmetic index for \(G\) is \(2^{15n}\times 3^{5n}\times 5^{-10n}\).
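Theorems 12 and 13 can be checked exactly with integer arithmetic, since \(5n(3n+1)\) is always even; a sketch with the arbitrary choice \(n=2\):

```python
from math import prod

# Edge partition of CNC5[n] derived above: 5 edges of type (2,2), 10n of type (2,3),
# and 5n(3n+1)/2 of type (3,3); n(3n+1) is always even, so all exponents are integers.
n = 2
partition = {(2, 2): 5, (2, 3): 10*n, (3, 3): 5 * n * (3*n + 1) // 2}

II1 = prod((a + b)**k for (a, b), k in partition.items())
II2 = prod((a * b)**k for (a, b), k in partition.items())

assert II1 == 2**((15*n*n + 5*n + 20) // 2) * 3**((15*n*n + 5*n) // 2) * 5**(10*n)  # Theorem 12
assert II2 == 2**(10*(n + 1)) * 3**(15*n*(n + 1))                                   # Theorem 13
```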

3.4. Silicate Networks

Silicates are formed by mixing metal carbonates or metal oxides with sand. The silicate network is denoted by \(SL_{n}\), where \(n\) is the number of hexagons between the center and the boundary of \(SL_{n}\).

Figure 4. 2\(-\)dimensional silicate network

A 2\(-\)dimensional silicate network is shown in Figure 4. Let \(G\left(V\left(G\right), E\left(G\right)\right)\) be the graph of silicate networks; then \(\left|V\left(G\right)\right|=3n\left(5n+1\right)\) and \(\left|E\left(G\right)\right|=36n^{2}\). In this structure there are three types of edges on the basis of their degrees, so we can decompose the set of edges as \(E\left(G\right)=E_{1}\left(G\right)\bigcup E_{2}\left(G\right)\bigcup E_{3}\left(G\right)\), where \begin{eqnarray*} E_{1}\left(G\right)&=&\left\{e=rt\in E\left(G\right) \ \ | \ \ \xi_{r}=3, \xi_{t}=3\right\},\\ E_{2}\left(G\right)&=&\left\{e=rt\in E\left(G\right) \ \ | \ \ \xi_{r}=3, \xi_{t}=6\right\},\\ E_{3}\left(G\right)&=&\left\{e=rt\in E\left(G\right) \ \ | \ \ \xi_{r}=6, \xi_{t}=6\right\}. \end{eqnarray*} Now, \(\left|E_{1}\left(G\right)\right|=6n\), \(\left|E_{2}\left(G\right)\right|=6n\left(3n+1\right)\) and \(\left|E_{3}\left(G\right)\right|=6n\left(3n-2\right)\), so we have the following results.

Theorem 16. Let \(G\left(V\left(G\right), E\left(G\right)\right)\) be the graph of Silicate network \(SL_{n}\). Then the first multiplicative Zagreb index for \(G\) is \(2^{18n\left(2n-1\right)}\times 3^{6n\left(9n+1\right)}\).

Theorem 17. Let \(G\left(V\left(G\right), E\left(G\right)\right)\) be the graph of Silicate network \(SL_{n}\). Then the second multiplicative Zagreb index for \(G\) is \(2^{18n\left(3n-1\right)}\times 3^{72n^{2}}\).

Theorem 18. Let \(G\left(V\left(G\right), E\left(G\right)\right)\) be the graph of Silicate network \(SL_{n}\). Then the multiplicative atomic bond connectivity index for \(G\) is \(2^{-9n\left(2n-1\right)}\times 3^{-36n^{2}}\times 5^{3n\left(3n-2\right)}\times 7^{3n\left(3n+1\right)}\).

Theorem 19. Let \(G\left(V\left(G\right), E\left(G\right)\right)\) be the graph of Silicate network \(SL_{n}\). Then the multiplicative geometric arithmetic index for \(G\) is \(2^{9n\left(3n+1\right)}\times 3^{-6n\left(3n+1\right)}\).
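A quick check of Theorems 16 and 18 from the silicate edge partition (\(n=1\) chosen for illustration):

```python
from math import prod, isclose

# Edge partition of SL_n: 6n edges of type (3,3), 6n(3n+1) of type (3,6),
# and 6n(3n-2) of type (6,6).
n = 1
partition = {(3, 3): 6*n, (3, 6): 6*n*(3*n + 1), (6, 6): 6*n*(3*n - 2)}

II1   = prod((a + b)**k for (a, b), k in partition.items())
ABCII = prod(((a + b - 2) / (a*b))**(k/2) for (a, b), k in partition.items())

assert II1 == 2**(18*n*(2*n - 1)) * 3**(6*n*(9*n + 1))       # Theorem 16
assert isclose(ABCII, 2.0**(-9*n*(2*n - 1)) * 3.0**(-36*n*n)
               * 5.0**(3*n*(3*n - 2)) * 7.0**(3*n*(3*n + 1)))  # Theorem 18
```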

3.5. Chain Silicate Networks

A chain silicate is obtained by arranging \(n\) tetrahedra linearly. Chain silicate networks are denoted by \(CS_{n}\).

Figure 5. Chain silicate network

A chain silicate network is shown in Figure 5. Let \(G\left(V\left(G\right), E\left(G\right)\right)\) be the graph of chain silicate networks; then \(\left|V\left(G\right)\right|=3n+1\) and \(\left|E\left(G\right)\right|=6n\). In this structure there are three types of edges on the basis of their degrees, so we can decompose the set of edges as \(E\left(G\right)=E_{1}\left(G\right)\bigcup E_{2}\left(G\right)\bigcup E_{3}\left(G\right)\), where \begin{eqnarray*} E_{1}\left(G\right)&=&\left\{e=rt\in E\left(G\right) \ \ | \ \ \xi_{r}=3, \xi_{t}=3\right\},\\ E_{2}\left(G\right)&=&\left\{e=rt\in E\left(G\right) \ \ | \ \ \xi_{r}=3, \xi_{t}=6\right\},\\ E_{3}\left(G\right)&=&\left\{e=rt\in E\left(G\right) \ \ | \ \ \xi_{r}=6, \xi_{t}=6\right\}. \end{eqnarray*} It is easy to check that \(\left|E_{1}\left(G\right)\right|=n+4\), \(\left|E_{2}\left(G\right)\right|=2\left(2n-1\right)\) and \(\left|E_{3}\left(G\right)\right|=n-2\).

Theorem 20. Let \(G\left(V\left(G\right), E\left(G\right)\right)\) be the graph of Chain Silicate networks \(CS_{n}\). Then the first multiplicative Zagreb index for \(G\) is \(2^{3n}\times 3^{2\left(5n-1\right)}\).

Theorem 21. Let \(G\left(V\left(G\right), E\left(G\right)\right)\) be the graph of Chain Silicate networks \(CS_{n}\). Then the second multiplicative Zagreb index for \(G\) is \(2^{6\left(n-1\right)}\times 3^{12n}\).

Theorem 22. Let \(G\left(V\left(G\right), E\left(G\right)\right)\) be the graph of Chain Silicate networks \(CS_{n}\). Then the multiplicative atomic bond connectivity index for \(G\) is \(\sqrt{2^{-3n+12}}\times \sqrt{3^{-12n}}\times \sqrt{5^{n-2}}\times 7^{2n-1}\).

Theorem 23. Let \(G\left(V\left(G\right), E\left(G\right)\right)\) be the graph of Chain Silicate networks \(CS_{n}\). Then the multiplicative geometric arithmetic index for \(G\) is \(2^{3\left(2n-1\right)}\times 3^{-2\left(2n-1\right)}\).
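Theorems 21 and 23 can likewise be confirmed from the partition (\(n=5\) is an arbitrary choice; the partition requires \(n\geq 2\)):

```python
from math import prod, sqrt, isclose

# Edge partition of CS_n (n >= 2): n+4 edges of type (3,3),
# 2(2n-1) of type (3,6), and n-2 of type (6,6).
n = 5
partition = {(3, 3): n + 4, (3, 6): 2*(2*n - 1), (6, 6): n - 2}

II2  = prod((a * b)**k for (a, b), k in partition.items())
GAII = prod((2*sqrt(a*b) / (a + b))**k for (a, b), k in partition.items())

assert II2 == 2**(6*(n - 1)) * 3**(12*n)                        # Theorem 21
assert isclose(GAII, 2.0**(3*(2*n - 1)) * 3.0**(-2*(2*n - 1)))  # Theorem 23
```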

3.6. Hexagonal Networks

It is known that there exist three regular plane tilings composed of the same kind of regular polygon: triangles, squares, or hexagons. The triangular tiling is used in the construction of hexagonal networks. A hexagonal network is denoted by \(HX_{n}\), where \(n\) is the number of vertices on each side of the hexagon.

Figure 6. 6\(-\)dimensional hexagonal network

A 6\(-\)dimensional hexagonal network is shown in Figure 6. Let \(G\left(V\left(G\right), E\left(G\right)\right)\) be the graph of the hexagonal network; then \(\left|V\left(G\right)\right|=3n^{2}-3n+1\) and \(\left|E\left(G\right)\right|=3\left(3n^{2}-5n+2\right)\). In this structure there are five types of edges on the basis of their degrees, so we can decompose the set of edges as \(E\left(G\right)=E_{1}\left(G\right)\bigcup E_{2}\left(G\right)\bigcup E_{3}\left(G\right)\bigcup E_{4}\left(G\right)\bigcup E_{5}\left(G\right)\), where \begin{eqnarray*} E_{1}\left(G\right)&=&\left\{e=rt\in E\left(G\right) \ \ | \ \ \xi_{r}=3, \xi_{t}=4\right\},\\ E_{2}\left(G\right)&=&\left\{e=rt\in E\left(G\right) \ \ | \ \ \xi_{r}=3, \xi_{t}=6\right\},\\ E_{3}\left(G\right)&=&\left\{e=rt\in E\left(G\right) \ \ | \ \ \xi_{r}=4, \xi_{t}=4\right\},\\ E_{4}\left(G\right)&=&\left\{e=rt\in E\left(G\right) \ \ | \ \ \xi_{r}=4, \xi_{t}=6\right\},\\ E_{5}\left(G\right)&=&\left\{e=rt\in E\left(G\right) \ \ | \ \ \xi_{r}=6, \xi_{t}=6\right\}. \end{eqnarray*} It is easy to check that \(\left|E_{1}\left(G\right)\right|=12\), \(\left|E_{2}\left(G\right)\right|=6\), \(\left|E_{3}\left(G\right)\right|=6\left(n-3\right)\), \(\left|E_{4}\left(G\right)\right|=12\left(n-2\right)\) and \(\left|E_{5}\left(G\right)\right|=3\left(3n^{2}-11n+10\right)\).

Theorem 24. Let \(G\left(V\left(G\right), E\left(G\right)\right)\) be the graph of Hexagonal Network \(HX_{n}\). Then the first multiplicative Zagreb index for \(G\) is \(2^{18\left(n^{2}-2n-1\right)}\times 3^{3\left(3n^{2}-11n+14\right)}\times 5^{12\left(n-2\right)}\times 7^{12}\).

Theorem 25. Let \(G\left(V\left(G\right), E\left(G\right)\right)\) be the graph of Hexagonal Network \(HX_{n}\). Then the second multiplicative Zagreb index for \(G\) is \(2^{6\left(3n^{2}-n-9\right)}\times 3^{6\left(3n^{2}-9n+10\right)}\).

Theorem 26. Let \(G\left(V\left(G\right), E\left(G\right)\right)\) be the graph of Hexagonal Network \(HX_{n}\). Then the multiplicative atomic bond connectivity index for \(G\) is \(\sqrt{2^{-3\left(3n^{2}-5n+2\right)}}\times 3^{-3\left(3n^{2}-10n+13\right)}\times \sqrt{5^{3\left(3n^{2}-11n+14\right)}}\times 7^{3}\).

Theorem 27. Let \(G\left(V\left(G\right), E\left(G\right)\right)\) be the graph of Hexagonal Network \(HX_{n}\). Then the multiplicative geometric arithmetic index for \(G\) is \(2^{3\left(6n-1\right)}\times 3^{6\left(n-2\right)}\times 5^{-12\left(n-2\right)}\times7^{-12}\).
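Both Zagreb-type closed forms can be verified exactly with integer arithmetic; a sketch for Theorems 24 and 25 (\(n=5\), arbitrary; the partition requires \(n\geq 3\)):

```python
from math import prod

# Edge partition of HX_n (n >= 3): 12 edges of type (3,4), 6 of type (3,6),
# 6(n-3) of type (4,4), 12(n-2) of type (4,6), 3(3n^2-11n+10) of type (6,6).
n = 5
partition = {(3, 4): 12, (3, 6): 6, (4, 4): 6*(n - 3),
             (4, 6): 12*(n - 2), (6, 6): 3*(3*n*n - 11*n + 10)}

II1 = prod((a + b)**k for (a, b), k in partition.items())
II2 = prod((a * b)**k for (a, b), k in partition.items())

assert II1 == 2**(18*(n*n - 2*n - 1)) * 3**(3*(3*n*n - 11*n + 14)) \
              * 5**(12*(n - 2)) * 7**12                              # Theorem 24
assert II2 == 2**(6*(3*n*n - n - 9)) * 3**(6*(3*n*n - 9*n + 10))     # Theorem 25
```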

3.7. Oxide Networks

An oxide network is denoted as \(OX_{n}\), where \(n\) is the number of dimensions.

Figure 7. 5\(-\)dimensional oxide network

A 5\(-\)dimensional oxide network is shown in Figure 7. Let \(G\left(V\left(G\right), E\left(G\right)\right)\) be the graph of the oxide network; then \(\left|V\left(G\right)\right|=9n^{2}+3n\) and \(\left|E\left(G\right)\right|=18n^{2}\). In this structure there are two types of edges on the basis of their degrees, so we can decompose the set of edges as \(E\left(G\right)=E_{1}\left(G\right)\bigcup E_{2}\left(G\right)\), where \begin{eqnarray*} E_{1}\left(G\right)&=&\left\{e=rt\in E\left(G\right) \ \ | \ \ \xi_{r}=2, \xi_{t}=4\right\},\\ E_{2}\left(G\right)&=&\left\{e=rt\in E\left(G\right) \ \ | \ \ \xi_{r}=4, \xi_{t}=4\right\}. \end{eqnarray*} It is easy to check that \(\left|E_{1}\left(G\right)\right|=12n\) and \(\left|E_{2}\left(G\right)\right|=6n\left(3n-2\right)\).

Theorem 28. Let \(G\left(V\left(G\right), E\left(G\right)\right)\) be the graph of Oxide Network \(OX_{n}\). Then the first multiplicative Zagreb index for \(G\) is \(2^{6n\left(9n-4\right)}\times 3^{12n}\).

Theorem 29. Let \(G\left(V\left(G\right), E\left(G\right)\right)\) be the graph of Oxide Network \(OX_{n}\). Then the second multiplicative Zagreb index for \(G\) is \(2^{12n\left(6n-1\right)}\).

Theorem 30. Let \(G\left(V\left(G\right), E\left(G\right)\right)\) be the graph of Oxide Network \(OX_{n}\). Then the multiplicative atomic bond connectivity index for \(G\) is \(2^{-3n\left(9n-4\right)}\times 3^{3n\left(3n-2\right)}\).

Theorem 31. Let \(G\left(V\left(G\right), E\left(G\right)\right)\) be the graph of Oxide Network \(OX_{n}\). Then the multiplicative geometric arithmetic index for \(G\) is \(2^{18n}\times 3^{-12n}\).
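Theorems 29 and 31 can be confirmed from the two-class oxide partition (\(n=3\), arbitrary):

```python
from math import prod, sqrt, isclose

# Edge partition of OX_n: 12n edges of type (2,4) and 6n(3n-2) of type (4,4).
n = 3
partition = {(2, 4): 12*n, (4, 4): 6*n*(3*n - 2)}

II2  = prod((a * b)**k for (a, b), k in partition.items())
GAII = prod((2*sqrt(a*b) / (a + b))**k for (a, b), k in partition.items())

assert II2 == 2**(12*n*(6*n - 1))                   # Theorem 29
assert isclose(GAII, 2.0**(18*n) * 3.0**(-12*n))    # Theorem 31
```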

3.8. Honeycomb Networks

If we recursively use hexagonal tiling in a particular pattern, honeycomb networks are formed. A honeycomb is denoted as \(HC_{n}\), where \(n\) is the number of hexagons between the central and boundary hexagon.

Figure 8. 4\(-\)dimensional honeycomb network

A 4\(-\)dimensional honeycomb network is shown in Figure 8. Let \(G\left(V\left(G\right), E\left(G\right)\right)\) be the graph of the honeycomb network; then \(\left|V\left(G\right)\right|=6n^{2}\) and \(\left|E\left(G\right)\right|=3n\left(3n-1\right)\). In this structure there are three types of edges on the basis of their degrees, so we can decompose the set of edges as \(E\left(G\right)=E_{1}\left(G\right)\bigcup E_{2}\left(G\right)\bigcup E_{3}\left(G\right)\), where \begin{eqnarray*} E_{1}\left(G\right)&=&\left\{e=rt\in E\left(G\right) \ \ | \ \ \xi_{r}=2, \xi_{t}=2\right\},\\ E_{2}\left(G\right)&=&\left\{e=rt\in E\left(G\right) \ \ | \ \ \xi_{r}=2, \xi_{t}=3\right\},\\ E_{3}\left(G\right)&=&\left\{e=rt\in E\left(G\right) \ \ | \ \ \xi_{r}=3, \xi_{t}=3\right\}. \end{eqnarray*} It is easy to check that \(\left|E_{1}\left(G\right)\right|=6\), \(\left|E_{2}\left(G\right)\right|=12\left(n-1\right)\) and \(\left|E_{3}\left(G\right)\right|=3\left(3n^{2}-5n+2\right)\).

Theorem 32. Let \(G\left(V\left(G\right), E\left(G\right)\right)\) be the graph of Honeycomb Network \(HC_{n}\). Then the first multiplicative Zagreb index for \(G\) is \(2^{3\left(3n^{2}-5n+6\right)}\times 3^{3\left(3n^{2}-5n+2\right)}\times 5^{12\left(n-1\right)}\).

Theorem 33. Let \(G\left(V\left(G\right), E\left(G\right)\right)\) be the graph of Honeycomb Network \(HC_{n}\). Then the second multiplicative Zagreb index for \(G\) is \(2^{12n}\times 3^{18n\left(n-1\right)}\).

Theorem 34. Let \(G\left(V\left(G\right), E\left(G\right)\right)\) be the graph of Honeycomb Network \(HC_{n}\). Then the multiplicative atomic bond connectivity index for \(G\) is \(2^{3\left(3n^{2}-7n+3\right)}\times 3^{-3\left(3n^{2}-5n+2\right)}\).

Theorem 35. Let \(G\left(V\left(G\right), E\left(G\right)\right)\) be the graph of Honeycomb Network \(HC_{n}\). Then the multiplicative geometric arithmetic index for \(G\) is \(2^{18\left(n-1\right)}\times 3^{6\left(n-1\right)}\times 5^{-12\left(n-1\right)}\).
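A check of Theorems 32 and 34 from the honeycomb partition (\(n=3\), arbitrary):

```python
from math import prod, sqrt, isclose

# Edge partition of HC_n: 6 edges of type (2,2), 12(n-1) of type (2,3),
# and 3(3n^2-5n+2) of type (3,3).
n = 3
partition = {(2, 2): 6, (2, 3): 12*(n - 1), (3, 3): 3*(3*n*n - 5*n + 2)}

II1   = prod((a + b)**k for (a, b), k in partition.items())
ABCII = prod(sqrt((a + b - 2) / (a*b))**k for (a, b), k in partition.items())

assert II1 == 2**(3*(3*n*n - 5*n + 6)) * 3**(3*(3*n*n - 5*n + 2)) \
              * 5**(12*(n - 1))                                        # Theorem 32
assert isclose(ABCII, 2.0**(3*(3*n*n - 7*n + 3))
               * 3.0**(-3*(3*n*n - 5*n + 2)))                          # Theorem 34
```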

Remark 1. We can compute the following easily by using Propositions 2 and 3.

  • i \(SCII\left(TUZC_{6}\left[p,q\right]\right)=2^{\frac{p\left(2-3q\right)}{2}}\times 3^{\frac{p\left(2-3q\right)}{2}}\times 5^{-2p}\).
  • ii \(PCII\left(TUZC_{6}\left[p,q\right]\right)=2^{-2p}\times 3^{-3pq}\).
  • iii \(SCII\left(TUAC_{6}\left[p,q\right]\right)=\sqrt{2^{-p\left(1+3q\right)}}\times \sqrt{3^{-p\left(3q-1\right)}}\times 5^{-p}\).
  • iv \(PCII\left(TUAC_{6}\left[p,q\right]\right)=2^{-2p}\times 3^{-3pq}\).
  • v \(SCII\left(CNC_{5}\left[n\right]\right)=\sqrt[4]{2^{-5\left(3n^{2}+n+4\right)}}\times \sqrt[4]{3^{-5n\left(3n+1\right)}}\times 5^{-5n}\).
  • vi \(PCII\left(CNC_{5}\left[n\right]\right)=2^{-5\left(n+1\right)}\times \sqrt{3^{-15n\left(n+1\right)}}\).
  • vii \(SCII\left(SL_{n}\right)=2^{-9n\left(2n-1\right)}\times 3^{-3n\left(9n+1\right)}\).
  • viii \(PCII\left(SL_{n}\right)=2^{-9n\left(3n-1\right)}\times 3^{-36n^{2}}\).
  • ix \(SCII\left(CS_{n}\right)=\sqrt{2^{-3n}}\times 3^{-\left(5n-1\right)}\).
  • x \(PCII\left(CS_{n}\right)=2^{-3\left(n-1\right)}\times 3^{-6n}\).
  • xi \(SCII\left(HX_{n}\right)=2^{-9\left(n^{2}-2n-1\right)}\times \sqrt{3^{-3\left(3n^{2}-11n+14\right)}}\times 5^{-6\left(n-2\right)}\times 7^{-6}\).
  • xii \(PCII\left(HX_{n}\right)=2^{-3\left(3n^{2}-n-9\right)}\times 3^{-3\left(3n^{2}-9n+10\right)}\).
  • xiii \(SCII\left(OX_{n}\right)=2^{-3n\left(9n-4\right)}\times 3^{-6n}\).
  • xiv \(PCII\left(OX_{n}\right)=2^{-6n\left(6n-1\right)}\).
  • xv \(SCII\left(HC_{n}\right)=\sqrt{2^{-3\left(3n^{2}-5n+6\right)}}\times \sqrt{3^{-3\left(3n^{2}-5n+2\right)}}\times 5^{-6\left(n-1\right)}\).
  • xvi \(PCII\left(HC_{n}\right)=2^{-6n}\times 3^{-9n\left(n-1\right)}\).
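Each entry follows mechanically from Propositions 2 and 3 applied to the corresponding theorem; for example, items (vii) and (viii) can be confirmed numerically (\(n=1\), arbitrary):

```python
from math import prod, isclose

# Edge partition of the silicate network SL_n, as in Section 3.4.
n = 1
partition = {(3, 3): 6*n, (3, 6): 6*n*(3*n + 1), (6, 6): 6*n*(3*n - 2)}

# SCII and PCII computed edge class by edge class.
SCII = prod((a + b) ** (-k / 2) for (a, b), k in partition.items())
PCII = prod((a * b) ** (-k / 2) for (a, b), k in partition.items())

assert isclose(SCII, 2.0**(-9*n*(2*n - 1)) * 3.0**(-3*n*(9*n + 1)))  # item (vii)
assert isclose(PCII, 2.0**(-9*n*(3*n - 1)) * 3.0**(-36*n*n))         # item (viii)
```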

Remark 2. We can compute the following easily by using Fact 1.

  • i \(HII_{1}\left(TUZC_{6}\left[p, q\right]\right)=2^{2p\left(3q-2\right)}\times 3^{2p\left(3q-2\right)}\times 5^{8p}\).
  • ii \(HII_{2}\left(TUZC_{6}\left[p, q\right]\right)=2^{8p}\times 3^{12pq}\).
  • iii \(MZ^{\alpha}_{1}\left(TUZC_{6}\left[p, q\right]\right)=2^{\alpha p\left(3q-2\right)}\times 3^{\alpha p\left(3q-2\right)}\times 5^{4\alpha p}\).
  • iv \(MZ^{\alpha}_{2}\left(TUZC_{6}\left[p, q\right]\right)=2^{4\alpha p}\times 3^{6\alpha pq}\).
  • v \(G^{*}A^{\alpha}II\left(TUZC_{6}\left[p, q\right]\right)=2^{6\alpha p}\times 3^{2\alpha p}\times 5^{-4\alpha p}\).
  • vi \(HII_{1}\left(TUAC_{6}\left[p, q\right]\right)=2^{2p\left(1+3q\right)}\times 3^{2p\left(3q-1\right)}\times 5^{4p}\).
  • vii \(HII_{2}\left(TUAC_{6}\left[p, q\right]\right)=2^{8p}\times 3^{12pq}\).
  • viii \(MZ^{\alpha}_{1}\left(TUAC_{6}\left[p, q\right]\right)=2^{\alpha p\left(1+3q\right)}\times 3^{\alpha p\left(3q-1\right)}\times 5^{2\alpha p}\).
  • ix \(MZ^{\alpha}_{2}\left(TUAC_{6}\left[p, q\right]\right)=2^{4\alpha p}\times 3^{6\alpha pq}\).
  • x \(G^{*}A^{\alpha}II\left(TUAC_{6}\left[p, q\right]\right)=2^{3\alpha p}\times 3^{\alpha p}\times 5^{-2\alpha p}\).
  • xi \(HII_{1}\left(CNC_{5}\left[n\right]\right)=2^{15n^{2}+5n+20}\times 3^{15n^{2}+5n}\times 5^{20n}\).
  • xii \(HII_{2}\left(CNC_{5}\left[n\right]\right)=2^{20\left(n+1\right)}\times 3^{30n\left(n+1\right)}\).
  • xiii \(MZ^{\alpha}_{1}\left(CNC_{5}\left[n\right]\right)=\sqrt{2^{\alpha\left(15n^{2}+5n+20\right)}}\times \sqrt{3^{\alpha\left(15n^{2}+5n\right)}}\times 5^{10\alpha n}\).
  • xiv \(MZ^{\alpha}_{2}\left(CNC_{5}\left[n\right]\right)=2^{10\alpha \left(n+1\right)}\times 3^{15\alpha n\left(n+1\right)}\).
  • xv \(G^{*}A^{\alpha}II\left(CNC_{5}\left[n\right]\right)= 2^{15\alpha n}\times 3^{5\alpha n}\times 5^{-10\alpha n}\).
  • xvi \(HII_{1}\left(SL_{n}\right)=2^{36n\left(2n-1\right)}\times 3^{12n\left(9n+1\right)}\).
  • xvii \(HII_{2}\left(SL_{n}\right)=2^{36n\left(3n-1\right)}\times 3^{144n^{2}}\).
  • xviii \(MZ^{\alpha}_{1}\left(SL_{n}\right)=2^{18\alpha n\left(2n-1\right)}\times 3^{6n\alpha \left(9n+1\right)}\).
  • xix \(MZ^{\alpha}_{2}\left(SL_{n}\right)=2^{18\alpha n\left(3n-1\right)}\times 3^{72\alpha n^{2}}\).
  • xx \(G^{*}A^{\alpha}II\left(SL_{n}\right)= 2^{9\alpha n\left(3n+1\right)}\times 3^{-6\alpha n\left(3n+1\right)}\).
  • xxi \(HII_{1}\left(CS_{n}\right)=2^{6n}\times 3^{4\left(5n-1\right)}\).
  • xxii \(HII_{2}\left(CS_{n}\right)=2^{12\left(n-1\right)}\times 3^{24n}\).
  • xxiii \(MZ^{\alpha}_{1}\left(CS_{n}\right)=2^{3\alpha n}\times 3^{2\alpha \left(5n-1\right)}\).
  • xxiv \(MZ^{\alpha}_{2}\left(CS_{n}\right)=2^{6\alpha \left(n-1\right)}\times 3^{12\alpha n}\).
  • xxv \(G^{*}A^{\alpha}II\left(CS_{n}\right)=2^{3\alpha \left(2n-1\right)}\times 3^{-2\alpha \left(2n-1\right)} \).
  • xxvi \(HII_{1}\left(HX_{n}\right)=2^{36\left(n^{2}-2n-1\right)}\times 3^{6\left(3n^{2}-11n+14\right)}\times 5^{24\left(n-2\right)}\times 7^{24}\).
  • xxvii \(HII_{2}\left(HX_{n}\right)=2^{12\left(3n^{2}-n-9\right)}\times 3^{12\left(3n^{2}-9n+10\right)}\).
  • xxviii \(MZ^{\alpha}_{1}\left(HX_{n}\right)=2^{18\alpha \left(n^{2}-2n-1\right)}\times 3^{3\alpha \left(3n^{2}-11n+14\right)}\times 5^{12\alpha \left(n-2\right)}\times 7^{12\alpha}\).
  • xxix \(MZ^{\alpha}_{2}\left(HX_{n}\right)=2^{6\alpha \left(3n^{2}-n-9\right)}\times 3^{6\alpha\left(3n^{2}-9n+10\right)}\).
  • xxx \(G^{*}A^{\alpha}II\left(HX_{n}\right)=2^{3\alpha \left(6n-1\right)}\times 3^{6\alpha \left(n-2\right)}\times 5^{-12\alpha \left(n-2\right)}\times7^{-12\alpha } \).
  • xxxi \(HII_{1}\left(OX_{n}\right)=2^{12n\left(9n-4\right)}\times 3^{24n}\).
  • xxxii \(HII_{2}\left(OX_{n}\right)=2^{24n\left(6n-1\right)}\).
  • xxxiii \(MZ^{\alpha}_{1}\left(OX_{n}\right)=2^{6\alpha n\left(9n-4\right)}\times 3^{12\alpha n}\).
  • xxxiv \(MZ^{\alpha}_{2}\left(OX_{n}\right)=2^{12\alpha n\left(6n-1\right)}\).
  • xxxv \(G^{*}A^{\alpha}II\left(OX_{n}\right)= 2^{18\alpha n}\times 3^{-12\alpha n}\).
  • xxxvi \(HII_{1}\left(HC_{n}\right)=2^{6\left(3n^{2}-5n+6\right)}\times 3^{6\left(3n^{2}-5n+2\right)}\times 5^{24\left(n-1\right)}\).
  • xxxvii \(HII_{2}\left(HC_{n}\right)=2^{24n}\times 3^{36n\left(n-1\right)}\).
  • xxxviii \(MZ^{\alpha}_{1}\left(HC_{n}\right)=2^{3\alpha\left(3n^{2}-5n+6\right)}\times 3^{3\alpha\left(3n^{2}-5n+2\right)}\times 5^{12\alpha\left(n-1\right)}\).
  • xxxix \(MZ^{\alpha}_{2}\left(HC_{n}\right)=2^{12\alpha n}\times 3^{18\alpha n\left(n-1\right)}\).
  • xl \(G^{*}A^{\alpha}II\left(HC_{n}\right)=2^{18\alpha\left(n-1\right)}\times 3^{6\alpha \left(n-1\right)}\times 5^{-12\alpha\left(n-1\right)}\).
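Fact 1 is stated earlier in the paper; assuming it gives \(HII_{1}(G)=\left(II^{*}_{1}(G)\right)^{2}\), which is consistent with every entry above, item (xxxi) can be cross-checked against Theorem 28 (\(n=2\), arbitrary):

```python
from math import prod

# Assumption: HII_1(G) = (II*_1(G))^2 (our reading of Fact 1, stated earlier
# in the paper). Edge partition of the oxide network OX_n as in Section 3.7.
n = 2
partition = {(2, 4): 12*n, (4, 4): 6*n*(3*n - 2)}
II1 = prod((a + b)**k for (a, b), k in partition.items())

assert II1**2 == 2**(12*n*(9*n - 4)) * 3**(24*n)   # item (xxxi)
```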

Author Contributions

All authors contributed equally to the writing of this paper. All authors read and approved the final manuscript.

Conflicts of Interest:

The authors declare no conflict of interest.

References

  1. Balaban, A. T. (1985). Applications of graph theory in chemistry. Journal of chemical information and computer sciences, 25(3), 334-343. [Google Scholor]
  2. Bonchev, D. (1991). Chemical graph theory: introduction and fundamentals (Vol. 1). CRC Press. [Google Scholor]
  3. Trinajstić, N. (2018). Chemical graph theory. Routledge. [Google Scholor]
  4. Ali, H., Baig, A. Q., & Shafiq, M. K. (2017). On topological properties of boron triangular sheet \(BTS(m, n)\), borophene chain \(B_{36}(n)\) and melem chain \(MC(n)\) nanostructures. Journal of Mathematical Nanoscience, 7(1), 39-60.[Google Scholor]
  5. Pattabiraman, K., & Suganya, T. (2018). Edge version of some degree based topological descriptors of graphs. Journal of Mathematical Nanoscience, 8(1), 1-12. [Google Scholor]
  6. Kanabur, R. (2018). On certain degree-based topological indices of armchair poly-hex nanotubes. Journal of Mathematical Nanoscience, 8(1), 19-25. [Google Scholor]
  7. Gutman, I. (2011). Multiplicative Zagreb indices of trees. Bulletin of Society of Mathematicians Banja Luka, 18, 17-23. [Google Scholor]
  8. Iranmanesh, A., Hosseinzadeh, M. A., & Gutman, I. (2012). On multiplicative Zagreb indices of graphs. Iranian Journal of Mathematical Chemistry, 3(2), 145-154.[Google Scholor]
  9. Ghorbani, M., & Azimi, N. (2012). Note on multiple Zagreb indices. Iranian Journal of Mathematical Chemistry, 3(2), 137-143. [Google Scholor]
  10. Hussain, Z., Ijaz, N., Tahir, W., Butt, M. T., & Talib, S. (2018). Calculating Degree Based Multiplicative Topological indices of Alcohol. Asian Journal of Applied Science and Technology 4(2), 132-139. [Google Scholor]
  11. Kahasy, A.T., Narayankar, K., & Selvan, D. (2018). Atom bond connectivity temperature index. Journal of Mathematical Nanoscience, 8(2), 67-75. [Google Scholor]
  12. Siddiqui, M. K., Rehman, N. A., & Imran, M. (2018). Topological indices of some families of nanostar dendrimers. Journal of Mathematical Nanoscience, 8(2), 91-103. [Google Scholor]
  13. He, F. & Jiang, X. (2018). Degree resistance distance of trees with given parameters. Transactions on Combinatorics, 7(4), 11-24. [Google Scholor]
  14. Gao, W., Jamil, M. K., Nazeer, W., & Amin, M. (2017). Degree-Based Multiplicative Atom-bond Connectivity Index of Nanostructures. International Journal of Applied Mathematics, 47(4). [Google Scholor]
  15. Hussain, Z., & Sabar, S. (2018). On multiplicative degree based topological indices of single-walled titania nanotubes. Journal of Mathematical Nanoscience, 8(1), 41-54. [Google Scholor]
  16. Kulli, V. R. (2017). Computation of some topological indices of certain networks. International Journal of Mathematical Archive, 8(2), 99-106.[Google Scholor]
  17. Kulli, V. R. (2019). F-Indices of chemical networks. International Journal of Mathematical Archive, 10(3), 21-30. [Google Scholor]
A note on the Kirchhoff index of graphs https://old.pisrt.org/psr-press/journals/odam-vol-2-issue-3-2019/a-note-on-the-kirchhoff-index-of-graphs/ Sun, 06 Oct 2019 13:17:19 +0000 https://old.pisrt.org/?p=3263
ODAM-Vol. 2 (2019), Issue 3, pp. 1 – 6 Open Access Full-Text PDF
Marjan M. Matejić, Emina I. Milovanović, Predrag D. Milošević, Igor Ž. Milovanović
Abstract: Let \(G\) be a simple connected graph with \(n\) vertices, \(m\) edges, and a sequence of vertex degrees \(\Delta=d_1\geq d_2\geq\cdots\geq d_n=\delta >0\). Denote by \(\mu_1\geq \mu_2\geq\cdots\geq \mu_{n-1}>\mu_n=0\) the Laplacian eigenvalues of \(G\). The Kirchhoff index of \(G\) is defined as \(Kf(G)=n\sum_{i=1}^{n-1} \frac{1}{\mu_i}\). A couple of new lower bounds for \(Kf(G)\) that depend on \(n\), \(m\), \(\Delta\) and some other graph invariants are obtained.

Open Journal of Discrete Applied Mathematics

A note on the Kirchhoff index of graphs

Marjan M. Matejić, Emina I. Milovanović, Predrag D. Milošević, Igor Ž. Milovanović\(^1\)
Faculty of Electronic Engineering, Beogradska 14, P. O. Box 73, 18000 Niš, Serbia.; (M.M.M & E.I.M & P.D.M & I.Ž.M)
\(^{1}\)Corresponding Author: igor@elfak.ni.ac.rs; Tel.: +381529603

Abstract

Let \(G\) be a simple connected graph with \(n\) vertices, \(m\) edges, and a sequence of vertex degrees \(\Delta=d_1\geq d_2\geq\cdots\geq d_n=\delta >0\). Denote by \(\mu_1\geq \mu_2\geq\cdots\geq \mu_{n-1}>\mu_n=0\) the Laplacian eigenvalues of \(G\). The Kirchhoff index of \(G\) is defined as \(Kf(G)=n\sum_{i=1}^{n-1} \frac{1}{\mu_i}\). A couple of new lower bounds for \(Kf(G)\) that depend on \(n\), \(m\), \(\Delta\) and some other graph invariants are obtained.

Keywords:

Kirchhoff index, Zagreb indices, forgotten index.

1. Introduction

Let \(G=(V,E)\), \(V=\{v_1,v_2,\ldots,v_n\}\), be a simple connected graph with \(n\) vertices, \(m\) edges and let \(\Delta=d_1\geq d_2\geq\cdots\geq d_n=\delta >0\), \(d_i=d(v_i)\), be a sequence of vertex degrees of \(G\). If vertices \(v_i\) and \(v_j\) are adjacent we write \(v_i\sim v_j\) or, for brevity, \(i\sim j\).

In graph theory, an invariant is a property of graphs that depends only on their abstract structure, not on the labeling of vertices or edges. Such quantities are also referred to as topological indices. The topological indices are an important class of molecular structure descriptors used for quantifying information on molecules. Many of them are defined as simple functions of the degrees of the vertices of (molecular) graph (see e.g. [1, 2, 3]). Historically, the first vertex-degree-based (VDB) structure descriptors were the graph invariants that are nowadays called Zagreb indices. The first and the second Zagreb index, \(M_1\) and \(M_2\), are defined as $$ M_1(G)=\sum_{i=1}^n d_i^2\,, $$ and $$ M_2(G)=\sum_{i\sim j} d_id_j\,. $$ The quantity \(M_1\) was first time considered in 1972 [4], whereas \(M_2\) in 1975 [5]. These were named Zagreb group indices [6] (in view of the fact that the authors of [4, 5] were members of the "Rudjer Bov sković" Institute in Zagreb, Croatia). Eventually, the name was shortened into first Zagreb index and second Zagreb index [7]. In [4] another topological index defined as sum of cubes of vertex degrees, that is $$ F(G)=\sum_{i=1}^n d_i^3\,, $$ was encountered. However, for the unknown reasons, it did not attracted any attention until 2015 when it was reinvented in [8] and named the {\it forgotten topological index}. % Details of the theory and applications of these topological indices can be found, for example, in [9, 10]. In [11] Fajtlowicz defined a topological index called the inverse degree, \(ID(G)\), as $$ ID(G)=\sum_{i=1}^n \frac{1}{d_i}. $$ Here we are interested in a graph invariant called the Kirchhoff index, which was introduced by Klein and Randić in [12]. It is defined as $$ Kf(G)=\sum_{i< j}r_{ij}, $$ where \(r_{ij}\) is the resistance distance between the vertices \(v_i\) and \(v_j\), i.e. 
\(r_{ij}\) is equal to the resistance between the corresponding points of an associated electrical network obtained by replacing each edge of \(G\) by a unit (1 ohm) resistor. The Kirchhoff index also has a very nice purely mathematical interpretation: in [13] and [14] it was demonstrated that the Kirchhoff index of a connected graph can be represented as $$ Kf(G)=n\sum_{i=1}^{n-1} \frac{1}{\mu_i}, $$ where \(\mu_1\geq \mu_2\geq\cdots\geq\mu_{n-1}>\mu_n=0\) are the Laplacian eigenvalues of \(G\). In this paper we obtain new lower bounds for \(Kf(G)\) which depend on some of the graph structural parameters and the above mentioned topological indices. Before we proceed, let us define one special class of \(d\)-regular graphs, \(\Gamma_d\) [15]. Let \(N(i)\) be the set of all neighbors of vertex \(i\), i.e. \(N(i)=\{k\, |\, k\in V,\, k\sim i\}\), and let \(d(i,j)\) be the distance between vertices \(i\) and \(j\). Denote by \(\Gamma_d\) the set of all \(d\)-regular graphs, \(1\leq d\leq n-1\), with diameter \(D=2\) and \(|N(i)\cap N(j)|=d\) for \(i\nsim j\).
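As a small numerical illustration (our own sketch, not part of the original text), the degree-based indices above and the spectral formula for \(Kf\) can be checked on a circuit \(C_n\), whose Laplacian eigenvalues are well known to be \(2-2\cos(2\pi k/n)\), \(k=0,1,\ldots,n-1\); the helper names below are ours.

```python
import math

def zagreb_indices(degrees, edges):
    # degrees: dict vertex -> degree; edges: list of pairs (u, v)
    M1 = sum(d * d for d in degrees.values())
    M2 = sum(degrees[u] * degrees[v] for u, v in edges)
    F = sum(d ** 3 for d in degrees.values())
    ID = sum(1.0 / d for d in degrees.values())
    return M1, M2, F, ID

def kf_cycle(n):
    # Kf(G) = n * sum of 1/mu_i over the nonzero Laplacian eigenvalues;
    # for C_n these eigenvalues are 2 - 2cos(2*pi*k/n), k = 1, ..., n-1
    return n * sum(1.0 / (2 - 2 * math.cos(2 * math.pi * k / n))
                   for k in range(1, n))

n = 6
edges = [(i, (i + 1) % n) for i in range(n)]
degrees = {i: 2 for i in range(n)}
print(zagreb_indices(degrees, edges))  # (24, 24, 48, 3.0)
print(round(kf_cycle(n), 6))           # 17.5, agreeing with (n^3 - n)/12
```

The last line agrees with the classical closed form \(Kf(C_n)=(n^3-n)/12\), confirming the eigenvalue representation on this example.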

2. Preliminaries

In this section we recall some results from the literature which are needed for the subsequent considerations.

Lemma 1. [16] Let \(p=(p_i)\), \(i=1,2,\ldots,n\), be a sequence of nonnegative real numbers and \(a=(a_i)\), \(i=1,2,\ldots,n\), a sequence of positive real numbers. Then, for any real \(r\) such that \(r\geq 1\) or \(r\leq 0\), the following holds

\begin{equation} \label{2.1} \left(\sum_{i=1}^n p_i\right)^{r-1}\sum_{i=1}^n p_ia_i^{r} \ge \left(\sum_{i=1}^n p_ia_i\right)^{r}. \end{equation}
(1)
If \(0\le r\le 1\), then the sense of (1) reverses. Equality holds if and only if either \(r=0\), or \(r=1\), or \(a_1=a_2=\cdots=a_n\), or \(p_1=p_2=\cdots=p_t=0\) and \(a_{t+1}=a_{t+2}=\cdots=a_n\), for some \(t\), \(1\leq t\leq n-1\).
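A quick numerical illustration of Lemma 1 for \(r=2\), the case used repeatedly below (our sketch; the sequences are chosen arbitrarily):

```python
def radon_sides(p, a, r):
    # the two sides of inequality (1):
    # (sum p)^(r-1) * sum p*a^r   versus   (sum p*a)^r
    lhs = sum(p) ** (r - 1) * sum(pi * ai ** r for pi, ai in zip(p, a))
    rhs = sum(pi * ai for pi, ai in zip(p, a)) ** r
    return lhs, rhs

p = [1.0, 2.0, 0.5]   # arbitrary nonnegative weights
a = [3.0, 1.0, 4.0]   # arbitrary positive values
lhs, rhs = radon_sides(p, a, 2)
print(lhs, rhs)        # 66.5 49.0 -- inequality (1) holds strictly
lhs_eq, rhs_eq = radon_sides(p, [2.0, 2.0, 2.0], 2)
print(lhs_eq == rhs_eq)  # True: equality when a_1 = a_2 = ... = a_n
```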

Lemma 2. [17] Let \(G\) be a simple connected graph with \(n\ge 2\) vertices. Then

\begin{equation} \label{2.2} Kf(G)\geq -1+(n-1)ID(G). \end{equation}
(2)
Equality holds if and only if either \(G\cong K_n\), or \(G\cong K_{t,n-t}\), \(1\leq t\leq\lfloor\frac{n}{2}\rfloor\), or \(G\in \Gamma_d\).
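Inequality (2) can be checked numerically (our sketch, using the closed-form Kirchhoff indices of the star and the path, both standard facts): equality holds for the star \(K_{1,n-1}\), a complete bipartite graph with \(t=1\), while the bound is strict for the path \(P_n\).

```python
def kf_star(n):
    # Kf of the star K_{1,n-1}: its Laplacian eigenvalues are
    # 0, 1 (n-2 times) and n, so Kf = n * ((n-2)/1 + 1/n) = (n-1)^2
    return n * ((n - 2) + 1 / n)

def kf_path(n):
    # in a tree the resistance distance equals the graph distance,
    # so Kf(P_n) is the Wiener index n(n^2 - 1)/6 of the path
    return n * (n * n - 1) / 6

def lemma2_bound(n, ID):
    # right-hand side of inequality (2): Kf(G) >= -1 + (n-1) ID(G)
    return -1 + (n - 1) * ID

n = 5
ID_star = (n - 1) + 1 / (n - 1)  # one vertex of degree n-1, n-1 leaves
ID_path = 2 + (n - 2) / 2        # two end vertices of degree 1, rest degree 2
print(kf_star(n), lemma2_bound(n, ID_star))  # equal: star = K_{1,n-1}
print(kf_path(n), lemma2_bound(n, ID_path))  # 20.0 13.0 -- strict for P_5
```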

3. Main results

In the next theorem we determine a new lower bound for \(Kf(G)\) in terms of the invariant \(M_1(G)\) and graph parameters \(n\), \(m\) and \(\Delta\).

Theorem 3. Let \(G\) be a simple connected graph with \(n\ge 2\) vertices and \(m\) edges. If \(G\) is a \(d\)-regular graph, \(1\le d\le n-1\), then

\begin{equation} \label{3.1} Kf(G)\geq \frac{n(n-1)-d}{d}. \end{equation}
(3)
Otherwise
\begin{equation} \label{3.2} Kf(G)\geq \frac{n(n-1)-\Delta}{\Delta}+\frac{(n-1)(n\Delta-2m)^2}{\Delta(2m\Delta-M_1(G))}. \end{equation}
(4)
Equality in (3) holds if and only if \(G\cong K_n\), or \(G\in \Gamma_d\). Equality in (4) holds if and only if \(G\cong K_{\Delta,n-\Delta}\).

Proof. If \(G\) is a \(d\)-regular graph, \(1\le d\le n-1\), then $$ ID(G)=\frac{n}{d}. $$ From the above and (2) we arrive at (3). For \(r=2\), \(p_i:=\frac{\Delta}{d_i}-1\), \(a_i:=d_i\), \(i=1,2,\ldots,n\), the inequality (1) becomes $$ \sum_{i=1}^{n} \left(\frac{\Delta}{d_i}-1 \right)\sum_{i=1}^{n} (\Delta-d_i)d_i\geq \left(\sum_{i=1}^{n} (\Delta-d_i)\right)^{2}, $$ that is

\begin{equation} \label{3.3} (\Delta ID(G)-n)(2m\Delta-M_1(G))\geq (n\Delta-2m)^2. \end{equation}
(5)
If \(G\) is a \(d\)-regular graph, \(1\le d\le n-1\), then \(2m\Delta-M_1(G)=0\). Therefore, we assume that \(G\) is not a regular graph. Then, according to (5), we have $$ ID(G)\ge \frac{n}{\Delta}+\frac{(n\Delta-2m)^2}{\Delta(2m\Delta-M_1(G))}. $$ The inequality (4) is obtained from the above and (2). The inequality (3) was proven in [18], with equality holding if and only if \(G\cong K_n\) or \(G\in \Gamma_d\). Since \(G\) is not a regular graph, equality in (5) is attained if and only if \(\Delta=d_1=d_2=\cdots=d_t>d_{t+1}=\cdots=d_n\), for some \(t\), \(2\leq t\leq n-1\), which implies that equality in (4) holds if and only if \(G\cong K_{\Delta,n-\Delta}\).
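The equality case of (4) can be verified numerically on \(K_{2,3}\) (our sketch, using the known Laplacian spectrum of \(K_{t,n-t}\); the helper names are ours):

```python
def kf_complete_bipartite(n, t):
    # Kf of K_{t,n-t}: Laplacian eigenvalues are 0, t (n-t-1 times),
    # n-t (t-1 times) and n, combined via Kf = n * sum 1/mu_i
    return n * ((n - t - 1) / t + (t - 1) / (n - t) + 1 / n)

def bound_4(n, m, Delta, M1):
    # right-hand side of inequality (4)
    return (n * (n - 1) - Delta) / Delta \
        + (n - 1) * (n * Delta - 2 * m) ** 2 / (Delta * (2 * m * Delta - M1))

# K_{2,3}: n = 5, m = 6, Delta = 3, M1 = 2*3^2 + 3*2^2 = 30, Kf = 23/3
print(kf_complete_bipartite(5, 2))
print(bound_4(5, 6, 3, 30))  # same value: the equality case of (4)
```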

Remark 1. From (4) it follows that $$ Kf(G)\geq \dfrac{n(n-1)-\Delta}{\Delta}, $$ which was proven in [15].

Corollary 4. Let \(G\) be a simple connected graph with \(n\ge 2\) vertices and \(m\) edges. If \(G\cong K_n\), then $$ Kf(G)=n-1. $$ Otherwise

\begin{equation} \label{3.4} Kf(G)\geq n-1+\frac{(n(n-1)-2m)^2}{2m(n-1)-M_1(G)}. \end{equation}
(6)
Equality holds if and only if \(G\cong K_{1,n-1}\), or \(G\in \Gamma_d\).

Proof. For \(r=2\), \(p_i:=\frac{n-1}{d_i}-1\), \(a_i:=d_i\), \(i=1,2,\ldots,n\), the inequality (1) transforms into $$ \sum_{i=1}^{n} \left(\frac{n-1}{d_i}-1 \right)\sum_{i=1}^{n} (n-1-d_i)d_i\geq \left(\sum_{i=1}^{n} (n-1-d_i)\right)^{2}, $$ that is

\begin{equation} \label{3.5} ((n-1) ID(G)-n)(2m(n-1)-M_1(G))\geq (n(n-1)-2m)^2. \end{equation}
(7)
If \(G\cong K_n\), then \(Kf(G)=n-1\) and \(2m(n-1)-M_1(G)=0\). If \(G\ncong K_n\), then from (7) we obtain $$ (n-1)ID(G)\ge n+\frac{(n(n-1)-2m)^2}{2m(n-1)-M_1(G)}. $$ The inequality (6) follows from the above and (2). For \(G\ncong K_n\), equality in (7) is attained if and only if either \(\Delta=d_1=d_2=\cdots=d_n\ne n-1\), or \(n-1=\Delta=d_1=d_2=\cdots=d_t>d_{t+1}=\cdots=d_n\), for some \(t\), \(1\leq t\leq n-1\). This implies that equality in (6) holds if and only if \(G\cong K_{1,n-1}\), or \(G\in \Gamma_d\).
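As a sanity check of (6) (our sketch, using the standard facts \(Kf(K_{1,n-1})=(n-1)^2\) and \(Kf(P_n)=n(n^2-1)/6\)), the bound is tight for the star and strict for the path:

```python
def bound_6(n, m, M1):
    # right-hand side of inequality (6)
    return n - 1 + (n * (n - 1) - 2 * m) ** 2 / (2 * m * (n - 1) - M1)

# star K_{1,4}: n = 5, m = 4, M1 = 4^2 + 4*1^2 = 20, Kf = (n-1)^2 = 16
print(bound_6(5, 4, 20))  # 16.0 -- equality
# path P_5: n = 5, m = 4, M1 = 2*1 + 3*4 = 14, Kf = 20
print(bound_6(5, 4, 14))  # 12.0 -- strict
```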

Remark 2. In [19] the following inequality was proven

\begin{equation} \label{3.5a} Kf(G)\geq \frac{2mn(n-1)(n-2)}{4m^2-M_1(G)-2m}, \end{equation}
(8)
with equality holding if and only if \(G\cong K_n\). We have performed testing on a large number of connected graphs, but could not find any graph for which the inequality (8) is stronger than (6).
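The comparison mentioned in the remark can be reproduced on a small example (our sketch; this single instance supports, but of course does not prove, the observation):

```python
def bound_8(n, m, M1):
    # right-hand side of inequality (8), from [19]
    return 2 * m * n * (n - 1) * (n - 2) / (4 * m * m - M1 - 2 * m)

def bound_6(n, m, M1):
    # right-hand side of inequality (6)
    return n - 1 + (n * (n - 1) - 2 * m) ** 2 / (2 * m * (n - 1) - M1)

# path P_5: n = 5, m = 4, M1 = 14, Kf = 20
print(bound_8(5, 4, 14))  # 480/42, roughly 11.43
print(bound_6(5, 4, 14))  # 12.0 -- here (6) is the stronger bound
```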

Corollary 5. Let \(G\) be a simple connected graph with \(n\ge 2\) vertices and \(m\) edges. Then

\begin{equation} \label{3.6} Kf(G)\geq \dfrac{n^2(n-1)-2m}{2m}. \end{equation}
(9)
Equality holds if and only if \(G\cong K_n\), or \(G\in \Gamma_d\).

Proof. The inequality (9) is obtained from (6) and the inequality $$ M_1(G)\ge \frac{4m^2}{n}, $$ which was proven in [20] (see also [21, 22]). The inequality (9) was also proven in [23] (see also [18]).
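A numerical check of (9) on the two extremes (our sketch, using \(Kf(K_n)=n-1\) and \(Kf(C_n)=(n^3-n)/12\), both standard facts):

```python
def bound_9(n, m):
    # right-hand side of inequality (9)
    return (n * n * (n - 1) - 2 * m) / (2 * m)

print(bound_9(4, 6))  # 3.0: equality for K_4, where Kf = n - 1 = 3
print(bound_9(6, 6))  # 14.0: strict for C_6, where Kf = (n^3 - n)/12 = 17.5
```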

The proof of the next theorem is fully analogous to that of Theorem 3 and is therefore omitted.

Theorem 6. Let \(G\) be a simple connected graph with \(n\ge 2\) vertices and \(m\) edges. If \(G\) is a \(d\)-regular graph, \(1\le d\le n-1\), then the inequality (3) holds. Otherwise

\begin{equation} \label{3.7} Kf(G)\geq \frac{n(n-1)-\Delta}{\Delta}+\frac{(n-1)(n\Delta-2m)^{3/2}}{\Delta(\Delta M_1(G)-F(G))^{1/2}}. \end{equation}
(10)
Equality in (10) holds if and only if \(G\cong K_{\Delta,n-\Delta}\).
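The equality case of (10) can again be verified on \(K_{2,3}\) (our sketch; \(Kf(K_{2,3})=23/3\) follows from the Laplacian spectrum of \(K_{t,n-t}\)):

```python
import math

def bound_10(n, m, Delta, M1, F):
    # right-hand side of inequality (10)
    return (n * (n - 1) - Delta) / Delta \
        + (n - 1) * (n * Delta - 2 * m) ** 1.5 \
        / (Delta * math.sqrt(Delta * M1 - F))

# K_{2,3}: n = 5, m = 6, Delta = 3, M1 = 30, F = 2*3^3 + 3*2^3 = 78, Kf = 23/3
print(bound_10(5, 6, 3, 30, 78))  # roughly 7.667 -- the equality case of (10)
```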

Corollary 7. Let \(G\) be a simple connected graph with \(n\ge 2\) vertices and \(m\) edges. If \(G\cong K_n\), then $$ Kf(G)=n-1. $$ If \(G\ncong K_n\), then

\begin{equation} \label{3.8} Kf(G)\geq n-1+\frac{(n(n-1)-2m)^{3/2}}{((n-1)M_1(G)-F(G))^{1/2}}. \end{equation}
(11)
Equality holds if and only if \(G\cong K_{1,n-1}\), or \(G\in \Gamma_d\).

Corollary 8. Let \(G\) be a simple connected graph with \(n\ge 2\) vertices and \(m\) edges. If \(G\cong K_n\), then $$ Kf(G)=n-1. $$ If \(G\ncong K_n\), then

\begin{equation} \label{3.9} Kf(G)\geq n-1+\frac{(n(n-1)-2m)^{3/2}}{((n-1)M_1(G)-2M_2(G))^{1/2}}. \end{equation}
(12)
Equality holds if and only if \(G\in \Gamma_d\).

Proof. The inequality (12) is obtained from (11) and the inequality \(F(G)\ge 2M_2(G)\), which follows from \(F(G)-2M_2(G)=\sum_{i\sim j}(d_i-d_j)^2\geq 0\).

Remark 3. In [24] a vertex-degree-based topological index called the Lanzhou index, \(Lz(G)\), was defined as $$ Lz(G)=\sum_{i=1}^n (n-1-d_i)d_i^2. $$ Since \(Lz(G)=(n-1)M_1(G)-F(G)\), from (11) we obtain the following relation between the topological indices \(Kf(G)\) and \(Lz(G)\): $$ (Kf(G)-n+1)Lz(G)^{1/2}\geq (n(n-1)-2m)^{3/2}, $$ with equality holding if and only if \(G\cong K_n\), or \(G\cong K_{1,n-1}\), or \(G\in \Gamma_d\).
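The relation in Remark 3 can be checked on the star \(K_{1,n-1}\), one of its equality cases (our sketch, using \(Kf(K_{1,n-1})=(n-1)^2\)):

```python
import math

def lanzhou(n, degrees):
    # Lz(G) = sum over vertices of (n - 1 - d_i) * d_i^2
    return sum((n - 1 - d) * d * d for d in degrees)

# star K_{1,4}: n = 5, m = 4, Kf = (n-1)^2 = 16
n, m, Kf = 5, 4, 16
degs = [4, 1, 1, 1, 1]
lhs = (Kf - n + 1) * math.sqrt(lanzhou(n, degs))
rhs = (n * (n - 1) - 2 * m) ** 1.5
print(abs(lhs - rhs) < 1e-9)  # True -- the equality case G = K_{1,n-1}
```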

Author Contributions

All authors contributed equally to the writing of this paper. All authors read and approved the final manuscript.

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Todeschini, R., & Consonni, V. (2008). Handbook of Molecular Descriptors (Vol. 11). John Wiley & Sons.
  2. Vukičević, D. (2010). Bond additive modeling 2. Mathematical properties of max-min rodeg index. Croatica Chemica Acta, 83(3), 261-273.
  3. Vukičević, D., & Gašperov, M. (2010). Bond additive modeling 1. Adriatic indices. Croatica Chemica Acta, 83(3), 243-260.
  4. Gutman, I., & Trinajstić, N. (1972). Graph theory and molecular orbitals. Total \(\pi\)-electron energy of alternant hydrocarbons. Chemical Physics Letters, 17(4), 535-538.
  5. Gutman, I., Ruščić, B., Trinajstić, N., & Wilcox Jr, C. F. (1975). Graph theory and molecular orbitals. XII. Acyclic polyenes. The Journal of Chemical Physics, 62(9), 3399-3405.
  6. Balaban, A. T., Motoc, I., Bonchev, D., & Mekenyan, O. (1983). Topological indices for structure-activity correlations. In Steric Effects in Drug Design (pp. 21-55). Springer, Berlin, Heidelberg.
  7. Gutman, I. (2014). On the origin of two degree-based topological indices. Bulletin (Académie serbe des sciences et des arts, Classe des sciences mathématiques et naturelles, Sciences mathématiques), (39), 39-52.
  8. Furtula, B., & Gutman, I. (2015). A forgotten topological index. Journal of Mathematical Chemistry, 53(4), 1184-1190.
  9. Borovićanin, B., Das, K. C., Furtula, B., & Gutman, I. (2017). Zagreb indices: bounds and extremal graphs. In Bounds in Chemical Graph Theory - Basics, Univ. Kragujevac, Kragujevac, 67-153.
  10. Borovićanin, B., Das, K. C., Furtula, B., & Gutman, I. (2017). Bounds for Zagreb indices. MATCH Communications in Mathematical and in Computer Chemistry, 78(1), 17-100.
  11. Fajtlowicz, S. (1987). On conjectures of Graffiti-II. Congressus Numerantium, 60, 187-197.
  12. Klein, D. J., & Randić, M. (1993). Resistance distance. Journal of Mathematical Chemistry, 12(1), 81-95.
  13. Gutman, I., & Mohar, B. (1996). The quasi-Wiener and the Kirchhoff indices coincide. Journal of Chemical Information and Computer Sciences, 36(5), 982-985.
  14. Zhu, H. Y., Klein, D. J., & Lukovits, I. (1996). Extensions of the Wiener number. Journal of Chemical Information and Computer Sciences, 36(3), 420-428.
  15. Palacios, J. L. (2016). Some additional bounds for the Kirchhoff index. MATCH Communications in Mathematical and in Computer Chemistry, 75(2), 365-372.
  16. Mitrinović, D. S., Pečarić, J., & Fink, A. M. (2013). Classical and New Inequalities in Analysis (Vol. 61). Springer Science & Business Media.
  17. Zhou, B., & Trinajstić, N. (2008). A note on Kirchhoff index. Chemical Physics Letters, 455(1-3), 120-123.
  18. Milovanović, I. Ž., & Milovanović, E. I. Bounds of Kirchhoff and degree Kirchhoff indices. In Bounds in Chemical Graph Theory - Mainstreams (I. Gutman, B. Furtula, K. C. Das, E. Milovanović, I. Milovanović, Eds.), Mathematical Chemistry Monographs, MCM 20, 93-119.
  19. Das, K. C. (2013). On the Kirchhoff index of graphs. Zeitschrift für Naturforschung A, 68(8-9), 531-538.
  20. Edwards, C. S. (1977). The largest vertex degree sum for a triangle in a graph. Bulletin of the London Mathematical Society, 9(2), 203-208.
  21. Ilić, A., & Stevanović, D. (2011). On comparing Zagreb indices. MATCH Communications in Mathematical and in Computer Chemistry, 62, 681-687.
  22. Yoon, Y. S., & Kim, J. K. (2006). A relationship between bounds on the sum of squares of degrees of a graph. Journal of Applied Mathematics and Computing, 21(1-2), 233-238.
  23. Milovanović, I. Ž., & Milovanović, E. I. (2017). On some lower bounds of the Kirchhoff index. MATCH Communications in Mathematical and in Computer Chemistry, 78, 169-180.
  24. Vukičević, D., Li, Q., Sedlar, J., & Došlić, T. (2018). Lanzhou index. MATCH Communications in Mathematical and in Computer Chemistry, 80, 863-876.