Applications of Markov spectra for the weighted partition network by the substitution rule
Mei-Feng Dai1, Ting-Ting Ju1, Yong-Bo Hou1, Fang Huang1, Dong-Lei Tang2, Wei-Yi Su3
1 Institute of Applied System Analysis, Jiangsu University, Zhenjiang, 212013, China
2 School of Mathematics and Statistics, Nanjing Audit University, Nanjing, 211815, China
3 Department of Mathematics, Nanjing University, Nanjing, 210093, China
Abstract The weighted self-similar network is introduced in an iterative way. To understand the topological properties of this self-similar network, we study its Laplacian spectrum. Firstly, exploiting the symmetry of the self-similar network, we deduce the recursive relationship between the eigenvalues of the transition-weighted matrix at two successive generations. Then, we obtain the eigenvalues of the Laplacian matrix from these two successive generations. Finally, we derive exact expressions for the eigentime identity and the Kirchhoff index from the spectrum of the Laplacian matrix.

Keywords: weighted self-similar network; Laplacian matrix; eigentime identity; Kirchhoff index; substitution rule
Cite this article: Mei-Feng Dai, Ting-Ting Ju, Yong-Bo Hou, Fang Huang, Dong-Lei Tang, Wei-Yi Su. Applications of Markov spectra for the weighted partition network by the substitution rule. Communications in Theoretical Physics, 2020, 72(5): 055602. doi:10.1088/1572-9494/ab7ed3
1. Introduction
In recent years, the field of complex network theory has penetrated many subjects and become a hot topic for scholars and scientists [1–7]. With the rapid growth of computing power and data-processing capability, our knowledge of complex networks has deepened considerably. Extensive research has revealed many characteristic properties of complex networks, such as the small-world effect [8], average path length [9], clustering coefficient [10], betweenness [11] and so on. Using these properties, the behavior of a network system can be predicted from its known structural characteristics and formation rules. There has recently been considerable interest in the study of complex networks, and their theory has been applied in many fields, for example neural networks [12], the internet [13], traffic networks [14] and communication networks [15]. Understanding the theory behind complex networks is also conducive to a deeper understanding of the real world.
A fractal structure is formed by repeated iteration, and it is this continual iteration that gives a fractal graph its self-similarity: each part of a self-similar graph resembles the whole. The Koch curve is a classic example of good self-similarity. More and more scholars have taken up the study of self-similar graphs. Lee and Loh [16] studied the self-similarity of graphs, and Qi et al [17] carried out a self-similarity analysis of the eubacteria genome based on weighted graphs.
The calculation of the Laplacian spectra of networks has attracted much attention, and Laplacian spectra are widely used [18–24]. The smallest non-trivial eigenvalue of the Laplacian matrix of a complex network is related to the diameter of that network [25]. Besides, the ratio of the maximum and minimum eigenvalues of the Laplacian matrix reflects the network's capability for synchronization [26]. From the spectrum we can obtain expressions for the eigentime identity and the Kirchhoff index, which reflect topological and dynamical properties of networks.
Specific network models have also attracted the attention of many scholars. Xia and Xi [27] studied the modeling and robust design of remanufacturing logistics networks based on the design of experiments. Dai et al [28] studied eigentime identities for weighted polymer networks. Shan et al [29] studied the domination number and minimum dominating sets in the pseudofractal scale-free web and the Sierpinski graph. To investigate the dynamical properties of different network models, one needs to calculate certain indices, such as the Laplacian spectrum [30], the eigentime identity [31] and the Kirchhoff index [32].
The eigenvalues of the Laplacian matrix of a graph can reflect information from graph theory. We deduce these eigenvalues from two successive generations of a transition-weighted matrix so that we can obtain an accurate expression for the Kirchhoff index from the spectrum of the Laplacian matrix.
Kirchhoff discovered that Laplace matrices have some important applications for circuit networks. So, a Laplace matrix is also called a Kirchhoff matrix. In physics, the Kirchhoff index characterizes the average power consumed by a resistance network when current is arbitrarily injected into it. If the Kirchhoff index is small, the electrical energy consumed by the resistor network per unit time is small. In addition, the Kirchhoff index can be used as an indicator of network robustness.
In this paper, according to the symmetry feature of the self-similar network by the substitution rule, we can obtain its global properties by understanding its local properties. Firstly, by using some elementary matrix transformations, we deduce the recursive relationship of its eigenvalues at two successive generations of the Markov operator matrix. Then, according to the relationship between the Laplacian matrix and Markov operator matrix, we obtain eigenvalues of the Laplacian matrix. Finally, in order to understand the topological properties of the self-similar network, we make use of Vieta’s formulas to calculate accurate expressions for the eigentime identity and Kirchhoff index from the spectrum of the Laplacian matrix.
2. The weighted self-similar network by the substitution rule
In this section, we will construct a weighted self-similar network by the substitution rule in an iterative way [33].
Let r (0<r≤1) be a positive real number, which will serve as the weight factor. Let Gt be the network generated after t (t≥0) iterations.
For t=0, G0 consists of a single edge of unit weight connecting two nodes.
For t≥1, Gt is obtained from Gt−1 by performing the following operations.
Each existing edge in Gt−1, with its two endpoints and edge weight w, is replaced by the connected cluster shown on the right-hand side of the arrow in figure 1, as described below.
Figure 1. Iterative construction method on every edge at two successive generations.
(i) The connected cluster is obtained by inserting two paths, one of length 2 and the other of length 3. The endpoints of both paths are the endpoints of the original edge, and all new nodes in the connected cluster are connected to each other.
(ii) Every edge weight in the connected cluster is scaled as shown in figure 1: the weights of the four edges linking the new and old nodes in Gt are equal to the weight of the original edge in Gt−1. The weights of the remaining three edges linking any two new nodes in Gt are scaled by a weight factor r. We show the substitution rule of Gt from t = 0 to t = 2 in figure 2.
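The substitution step described above can be sketched in code. The following is a minimal Python sketch of our reading of figure 1 (the function and node-labelling choices are ours, not from the paper): each edge (u, v) of weight w is replaced by a path u–a–v of length 2 and a path u–b–c–v of length 3, with the three new nodes a, b, c mutually connected by edges of weight rw.

```python
def substitute(edges, r):
    """One generation of the substitution rule; `edges` maps (u, v) to a weight."""
    new_edges = {}
    label = 1 + max(u for e in edges for u in e)  # next unused node label
    for (u, v), w in edges.items():
        a, b, c = label, label + 1, label + 2     # the three new nodes
        label += 3
        # the four edges linking new and old nodes keep the original weight w
        for e in ((u, a), (a, v), (u, b), (c, v)):
            new_edges[e] = w
        # the three edges among the new nodes are scaled by the weight factor r
        for e in ((a, b), (a, c), (b, c)):
            new_edges[e] = r * w
    return new_edges

def build(t, r):
    edges = {(0, 1): 1.0}  # G0: a single edge of unit weight
    for _ in range(t):
        edges = substitute(edges, r)
    return edges

edges = build(2, r=0.5)
nodes = {u for e in edges for u in e}
print(len(nodes), len(edges))  # N_2 = (7^2 + 3)/2 = 26 nodes, |E_2| = 7^2 = 49 edges
```

Since every edge spawns exactly three new nodes and seven new edges, this sketch reproduces the counts Nt=(7^t+3)/2 and |Et|=7^t quoted below.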
Let the generalized adjacency matrix (weight matrix) of Gt be Wt, which describes the connections among the nodes of Gt as follows: Wt(i, j)=wij(t) if nodes i and j are adjacent in Gt, and Wt(i, j)=0 otherwise, where wij(t) is the weight of the edge between i and j. The strength matrix St of Gt is diagonal and given as St=diag(s1(t), s2(t),⋯,${s}_{{N}_{t}}(t)$), where si(t) is the strength of node i in Gt. Nt is the total number of nodes at generation t, with ${N}_{t}=\tfrac{{7}^{t}+3}{2}$, and Et is the edge set at generation t, with $| {E}_{t}| ={7}^{t}$. The sum of all edge weights in Gt is Qt. The initial connected graph G0 has two nodes, labeled Node 1 and Node 2; in G1, the strength of Node 1 and of Node 2 is 2.
Then, the Markov operator matrix Pt of Gt is defined as ${P}_{t}={S}_{t}^{-1}{W}_{t}$; its element at row i and column j is ${P}_{i,j}(t)={s}_{i}{\left(t\right)}^{-1}{w}_{{ij}}(t)$ for $i\ne j$. We then introduce the Laplacian matrix Lt=It−Pt, where It is the identity matrix of the same order as Pt.
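To make these definitions concrete, here is a short numerical check (based on our reconstruction of the substitution rule; `markov_matrix` is our own helper, not from the paper) that Pt = St^{-1}Wt is row-stochastic and that the strength-proportional vector is stationary for it:

```python
import numpy as np

def markov_matrix(t, r):
    """Return W_t, the strength vector s, and P_t = S_t^{-1} W_t."""
    edges = {(0, 1): 1.0}
    for _ in range(t):
        new, label = {}, 1 + max(u for e in edges for u in e)
        for (u, v), w in edges.items():
            a, b, c = label, label + 1, label + 2
            label += 3
            new.update({(u, a): w, (a, v): w, (u, b): w, (c, v): w,
                        (a, b): r * w, (a, c): r * w, (b, c): r * w})
        edges = new
    n = 1 + max(u for e in edges for u in e)
    W = np.zeros((n, n))
    for (u, v), w in edges.items():
        W[u, v] = W[v, u] = w
    s = W.sum(axis=1)                  # node strengths s_i(t)
    return W, s, W / s[:, None]        # P_t = S_t^{-1} W_t

W, s, P = markov_matrix(2, r=0.5)
pi = s / s.sum()                       # pi_i = s_i(t) / (2 Q_t)
print(np.allclose(P.sum(axis=1), 1.0), np.allclose(pi @ P, pi))  # True True
```

The second check anticipates the stationary distribution used in section 3.1.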
Let α represent the set of old nodes and β represent the set of new nodes in Gt. So, the Markov operator matrix Pt is divided into several blocks as follows:$\begin{eqnarray}{P}_{t}=\left(\begin{array}{cc}{P}_{\alpha ,\alpha } & {P}_{\alpha ,\beta }\\ {P}_{\beta ,\alpha } & {P}_{\beta ,\beta }\end{array}\right)\equiv \left(\begin{array}{cc}0 & {s}_{\alpha }^{-1}{w}_{\alpha ,\beta }\\ {s}_{\beta }^{-1}{w}_{\beta ,\alpha } & {s}_{\beta }^{-1}{w}_{\beta ,\beta }\end{array}\right),\end{eqnarray}$where Pα,β is the submatrix whose entries are the quotients of the weights and the strength of old nodes, while Pβ,α is the submatrix whose entries are the quotients of the weights and the strength of new nodes.
2.1. The recursive relationship of eigenvalues for Pt and Pt+1
In this section, we deduce the recursive relationship between the eigenvalues of the Markov operator matrices Pt and Pt+1 for the self-similar network generated by the substitution rule.
In order to obtain the recursive relationship of eigenvalues of the Markov operator matrices Pt and Pt+1, we need to calculate$\begin{eqnarray}\begin{array}{c}\begin{array}{c}{\rm{\det }}({P}_{t+1}-{xI})={\rm{\det }}\left(\begin{array}{cc}-{xI} & {B}_{n}\\ {D}_{n} & {C}_{n}\end{array}\right),\end{array}\end{array}\end{eqnarray}$where ${B}_{n}=({P}_{\alpha ,{\beta }_{{e}_{1}}}$ ${P}_{\alpha ,{\beta }_{{e}_{2}}}\cdots {P}_{\alpha ,{\beta }_{{e}_{{7}^{t}}}})$, ${C}_{n}={\rm{diag}}({P}_{{\beta }_{{e}_{1}},{\beta }_{{e}_{1}}}$−${xI},{P}_{{\beta }_{{e}_{2}},{\beta }_{{e}_{2}}}-{xI},\ \cdots ,\ {P}_{{\beta }_{{e}_{{7}^{t}}},{\beta }_{{e}_{{7}^{t}}}}-{xI})$ and ${D}_{n}\,={\left({P}_{{\beta }_{{e}_{1}},\alpha },{P}_{{\beta }_{{e}_{2}},\alpha },\cdots ,{P}_{{\beta }_{{e}_{{7}^{t}}},\alpha }\right)}^{\top }$.
Then, the following result can be obtained (see appendix A for detail):$\begin{eqnarray}\,\begin{array}{c}\begin{array}{l}{\rm{d}}{\rm{e}}{\rm{t}}({P}_{t+1}-xI)=\,{\left(\displaystyle \frac{1}{2}\right)}^{{\textstyle \tfrac{{7}^{t}+3}{2}}}\displaystyle \prod _{i=1}^{{\textstyle \tfrac{{7}^{t}+3}{2}}}\left[(16{r}^{3}+32{r}^{2}+20r+4){x}^{4}\right.\\ \,\,-\,(8{r}^{3}+16{r}^{2}+10r+3+4{r}^{3}{\lambda }_{t}+4r{\lambda }_{t}+{\lambda }_{t}){x}^{2}\\ \,\,-\,\left.(4{r}^{3}+4{r}^{2}+2r+6{r}^{2}{\lambda }_{t}+4r{\lambda }_{t})x-2{r}^{2}{\lambda }_{t}\right]\\ \,\,\times \,\left[\left(x-\displaystyle \frac{r+r\sqrt{(9r+5)(r+1)}+{r}^{2}}{2(2{r}^{2}+3r+1)})\right.\right.\\ \,\,\times \,{\left.\left(x-\displaystyle \frac{r-r\sqrt{(9r+5)(r+1)}+{r}^{2}}{2(2{r}^{2}+3r+1)}\right)\left(x+\displaystyle \frac{r}{2r+1}\right)\right]}^{{\textstyle \tfrac{{7}^{t}-3}{2}}},\end{array}\end{array}\,\end{eqnarray}$where λt are eigenvalues of the Markov operator matrix Pt.
3. Eigentime identity and Kirchhoff index
In this section, we calculate invariants related to the structure of the weighted self-similar network Gt generated by the substitution rule. From the above, we have obtained the relationship between the eigenvalues of Pt and Pt+1. We can therefore obtain exact closed expressions for the eigentime identity and the Kirchhoff index from two successive generations of the Laplacian matrix.
3.1. Eigentime identity
The first-passage time hi,j(t) is the expected time for a walker to travel from node i to node j, which can also be thought of as the trapping time of a trapping problem. The stationary distribution of Gt is $\pi ={({\pi }_{1},{\pi }_{2},\cdots ,{\pi }_{{N}_{t}})}^{\top }$, where ${\pi }_{i}=\tfrac{{s}_{i}(t)}{2{Q}_{t}}$ and Qt is the sum of all weights in Gt; it satisfies ${\sum }_{i=1}^{{N}_{t}}{\pi }_{i}=1$ and π⊤Pt=π⊤. The eigentime identity, denoted Ht, is defined as the expected time needed by a walker to travel from a node i to a target node j chosen randomly from all nodes according to the stationary distribution, that is,$\begin{eqnarray*}{H}_{t}=\sum _{j=1}^{{N}_{t}}{\pi }_{j}{h}_{i,j}(t),\end{eqnarray*}$which encodes much information about trapping in Gt. Suppose that ${L}_{t}=\{{\sigma }_{1},{\sigma }_{2},\ \cdots \ {\sigma }_{{N}_{t}}\}$ is the set of all the eigenvalues of the matrix Lt at generation t, ordered so that σ1=0. Since σi=1−λi, we have [28]$\begin{eqnarray}{H}_{t}=\sum _{i=2}^{{N}_{t}}\displaystyle \frac{1}{{\sigma }_{i}}=\sum _{i=2}^{{N}_{t}}\displaystyle \frac{1}{1-{\lambda }_{i}},\end{eqnarray}$where the zero eigenvalue σ1 (that is, λ1=1) is excluded.
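This spectral formula is easy to evaluate numerically for small t (again using our reconstruction of Gt; helper names are ours, and the value 3.75 below is our own hand check for G1 at r = 1/2):

```python
import numpy as np

def markov_matrix(t, r):
    """W_t, strengths s, and P_t = S_t^{-1} W_t for our reconstruction of G_t."""
    edges = {(0, 1): 1.0}
    for _ in range(t):
        new, label = {}, 1 + max(u for e in edges for u in e)
        for (u, v), w in edges.items():
            a, b, c = label, label + 1, label + 2
            label += 3
            new.update({(u, a): w, (a, v): w, (u, b): w, (c, v): w,
                        (a, b): r * w, (a, c): r * w, (b, c): r * w})
        edges = new
    n = 1 + max(u for e in edges for u in e)
    W = np.zeros((n, n))
    for (u, v), w in edges.items():
        W[u, v] = W[v, u] = w
    s = W.sum(axis=1)
    return W, s, W / s[:, None]

def eigentime(t, r):
    """H_t = sum of 1/(1 - lambda) over eigenvalues of P_t with lambda != 1."""
    _, _, P = markov_matrix(t, r)
    lam = np.linalg.eigvals(P).real    # spectrum of P_t is real (P_t ~ symmetric)
    lam = lam[np.abs(1.0 - lam) > 1e-9]
    return float(np.sum(1.0 / (1.0 - lam)))

H1 = eigentime(1, 0.5)
print(round(H1, 6))  # 3.75 for G_1 at r = 1/2
```

As an independent cross-check, Ht equals the Kemeny constant trace((I − Pt + 1π⊤)^{-1}) − 1 of the corresponding Markov chain.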
From equation (5), we can divide Lt into two disjoint subsets ${L}_{t}^{(1)}$ and ${L}_{t}^{(2)}$.
The first portion of equation (5) means that each eigenvalue ${\lambda }_{i}(t)$ of ${P}_{t}$ generates four eigenvalues ${\lambda }_{i,1}(t+1)$, ${\lambda }_{i,2}(t+1)$, ${\lambda }_{i,3}(t+1)$ and ${\lambda }_{i,4}(t+1)$, all belonging to ${L}_{t}^{(1)}$. Thus ${L}_{t}^{(1)}$ consists of the eigenvalues λi,1(t+1), λi,2(t+1), λi,3(t+1) and λi,4(t+1) $\left(i=1,2,\ \cdots ,\ \tfrac{{7}^{t}+3}{2}\right)$. ${L}_{t}^{(2)}$ is the set of roots of the polynomial ${(Q(x))}^{{E}_{t}-{N}_{t}}$.
Now, we calculate the eigentime identity Ht from the eigenvalues of Pt, with the eigenvalue λ=1 again excluded from each sum.$\begin{eqnarray}\begin{array}{l}{H}_{t}\,=\,\displaystyle \sum _{i=2}^{{N}_{t}}\displaystyle \frac{1}{{\sigma }_{i}}=\sum _{i=2}^{{N}_{t}}\displaystyle \frac{1}{1-{\lambda }_{i}}\\ \,=\,\displaystyle \sum _{{\lambda }_{i}(t)\in {L}_{t}^{(1)}}\displaystyle \frac{1}{1-{\lambda }_{i}(t)}+\sum _{{\lambda }_{i}(t)\in {L}_{t}^{(2)}}\displaystyle \frac{1}{1-{\lambda }_{i}(t)}\\ \,=\,\displaystyle \sum _{{\lambda }_{i}(t)\in {L}_{t}^{(1)}}\left(\displaystyle \frac{1}{1-{\lambda }_{i,1}(t)}+\displaystyle \frac{1}{1-{\lambda }_{i,2}(t)}+\displaystyle \frac{1}{1-{\lambda }_{i,3}(t)}\right.\\ \,+\,\left.\displaystyle \frac{1}{1-{\lambda }_{i,4}(t)}\right)+\displaystyle \sum _{{\lambda }_{i}(t)\in {L}_{t}^{(2)}}\displaystyle \frac{1}{1-{\lambda }_{i}(t)}.\end{array}\end{eqnarray}$
Then, Ht can be simplified as (see appendix B for detail)$\begin{eqnarray*}\begin{array}{lcl}{H}_{t} & = & \displaystyle \frac{1}{2}{\left(\displaystyle \frac{44{r}^{3}+78{r}^{2}+46r+8}{22{r}^{2}+8r+1}\right)}^{t}\\ & & +\,\displaystyle \frac{7\left({\textstyle \tfrac{24{r}^{2}+12r+2}{22{r}^{2}+8r+1}}+{\textstyle \tfrac{9{r}^{3}+22{r}^{2}+15r+3}{6{r}^{2}+5r+1}}\right)}{7-{\textstyle \tfrac{44{r}^{3}+78{r}^{2}+46r+8}{22{r}^{2}+8r+1}}}\\ & & \times \,\left({7}^{t}-{\left(\displaystyle \frac{44{r}^{3}+78{r}^{2}+46r+8}{22{r}^{2}+8r+1}\right)}^{t}\right)\\ & & +\,\displaystyle \frac{{\textstyle \tfrac{24{r}^{2}+12r+2}{22{r}^{2}+8r+1}}-{\textstyle \tfrac{9{r}^{3}+22{r}^{2}+15r+3}{6{r}^{2}+5r+1}}}{1-{\textstyle \tfrac{44{r}^{3}+78{r}^{2}+46r+8}{22{r}^{2}+8r+1}}}\\ & & \times \,\left(1-{\left(\displaystyle \frac{44{r}^{3}+78{r}^{2}+46r+8}{22{r}^{2}+8r+1}\right)}^{t}\right).\end{array}\end{eqnarray*}$
The function $C(r)=\tfrac{44{r}^{3}+78{r}^{2}+46r+8}{22{r}^{2}+8r+1}$, which governs the growth of Ht, is continuous on the interval (0, 1], with ${\mathrm{lim}}_{r\to {0}^{+}}C(r)=8$ and $C(1)=\tfrac{176}{31}\approx 5.68$. By the intermediate value theorem, there exists a number r0∈(0, 1] such that C(r0)=7. From figure 3, we see that this r0 is the unique value in (0, 1] satisfying C(r0)=7.
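Reading C(r) off as the ratio appearing in the expression for Ht (consistent with the stated values C(0+) = 8 and C(1) = 176/31), the crossing point r0 can be located numerically by bisection; this sketch is ours:

```python
def C(r):
    # growth factor appearing in the closed form of H_t
    return (44 * r**3 + 78 * r**2 + 46 * r + 8) / (22 * r**2 + 8 * r + 1)

# C(r) - 7 changes sign on (0, 1]: C(0+) = 8 > 7 while C(1) = 176/31 < 7,
# so bisection converges to the unique root r0 with C(r0) = 7.
lo, hi = 1e-12, 1.0
for _ in range(200):
    mid = 0.5 * (lo + hi)
    lo, hi = (mid, hi) if C(mid) > 7 else (lo, mid)
r0 = 0.5 * (lo + hi)
print(round(r0, 4))  # roughly 0.067
```
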
For very large networks (i.e. ${N}_{t}\to \infty $), using equation (8), the leading term of Ht obeys$\begin{eqnarray}{H}_{t}\sim \left\{\begin{array}{ll}{N}_{t}^{{\mathrm{log}}_{7}C(r)}, & {\rm{if}}\ 0\lt r\lt {r}_{0},\ {\rm{or}}\ {r}_{0}\lt r\leqslant 1,\\ {N}_{t}, & {\rm{if}}\ r={r}_{0}.\end{array}\right.\end{eqnarray}$
3.2. Kirchhoff index
Each edge in the complex network is replaced by an effective resistor that reflects its weight, which yields the corresponding electrical network H* associated with Gt. The resistance distance rij between vertices i and j of Gt is equal to the effective resistance between the two corresponding vertices of H*. Compared with traditional measures of network robustness, the Kirchhoff index better reflects the stability and connectivity of the network. The Kirchhoff index is the strength-weighted sum of all resistance distances:$\begin{eqnarray*}{{Kf}}^{\ast }({G}_{t})=\displaystyle \sum _{i\lt j}{s}_{i}{s}_{j}{r}_{{ij}},\,i,j=1,2,\,\cdots ,\,{N}_{t}.\end{eqnarray*}$
We now calculate the Kirchhoff index from the eigenvalues of Pt. The Kirchhoff index Kf*(Gt) can be expressed in terms of the spectrum of the Laplacian matrix of Gt as follows:$\begin{eqnarray}{{Kf}}^{\ast }({G}_{t})=2{E}_{t}\displaystyle \sum _{i=2}^{{N}_{t}}\displaystyle \frac{1}{{\sigma }_{i}}=2{E}_{t}\sum _{i=2}^{{N}_{t}}\displaystyle \frac{1}{1-{\lambda }_{i}}=2{E}_{t}{H}_{t},\end{eqnarray}$where the zero eigenvalue σ1 is again excluded.
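For the unweighted case r = 1 (where the edge count and the total weight Qt coincide), this identity can be verified numerically by computing resistance distances from the pseudoinverse of the combinatorial Laplacian St − Wt. The sketch below uses our reconstruction of Gt, and the helper name is ours:

```python
import numpy as np

def weight_matrix(t, r):
    """Weight matrix W_t of our reconstruction of G_t."""
    edges = {(0, 1): 1.0}
    for _ in range(t):
        new, label = {}, 1 + max(u for e in edges for u in e)
        for (u, v), w in edges.items():
            a, b, c = label, label + 1, label + 2
            label += 3
            new.update({(u, a): w, (a, v): w, (u, b): w, (c, v): w,
                        (a, b): r * w, (a, c): r * w, (b, c): r * w})
        edges = new
    n = 1 + max(u for e in edges for u in e)
    W = np.zeros((n, n))
    for (u, v), w in edges.items():
        W[u, v] = W[v, u] = w
    return W

t = 1
W = weight_matrix(t, r=1.0)             # r = 1: every edge has unit weight
s = W.sum(axis=1)
Lplus = np.linalg.pinv(np.diag(s) - W)  # pseudoinverse of the Laplacian S - W
n = len(s)
# Kirchhoff index from resistance distances r_ij = L+_ii + L+_jj - 2 L+_ij
Kf = sum(s[i] * s[j] * (Lplus[i, i] + Lplus[j, j] - 2.0 * Lplus[i, j])
         for i in range(n) for j in range(i + 1, n))
# spectral side: 2 |E_t| H_t, with H_t summed over eigenvalues of P_t != 1
lam = np.linalg.eigvals(W / s[:, None]).real
H = np.sum(1.0 / (1.0 - lam[np.abs(1.0 - lam) > 1e-9]))
print(np.isclose(Kf, 2 * 7**t * H))  # True: both sides agree
```
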
For very large networks (i.e. ${N}_{t}\to \infty $), using equations (8) and (9), the leading term of Kf*(Gt) obeys$\begin{eqnarray}\begin{array}{c}{{Kf}}^{\ast }({G}_{t})\sim \left\{\begin{array}{ll}{N}_{t}^{1+{\mathrm{log}}_{7}C(r)}, & {\rm{if}}\ 0\lt r\lt {r}_{0},\ {\rm{or}}\ {r}_{0}\lt r\leqslant 1,\\ {N}_{t}^{2}, & {\rm{if}}\ r={r}_{0}.\end{array}\right.\end{array}\end{eqnarray}$
Equations (9) and (11) give the following results: in the limit of large t, if r=r0 then the eigentime identity grows linearly with the network order while the Kirchhoff index grows quadratically. If 0<r<r0, or r0<r≤1, then the eigentime identity and Kirchhoff index grow as power-law functions of the network order with exponents ${\mathrm{log}}_{7}C(r)$ and $1+{\mathrm{log}}_{7}C(r)$, respectively.
4. Conclusions
In conclusion, we first calculated the eigenvalues of ${P}_{{\beta }_{{e}_{i}},{\beta }_{{e}_{i}}}$. Then, based on the symmetry of the weighted self-similar network, we deduced the recursive relationship between the eigenvalues (and their multiplicities) of the transition matrix at two successive generations. Finally, we derived exact expressions for the eigentime identity and the Kirchhoff index.
To obtain the topological and dynamical properties of a network, one needs the eigenvalues of its Laplacian matrix. However, obtaining the Laplacian spectrum of a general model is very difficult, and many problems in this field remain open for further exploration.
Acknowledgments
The authors express their gratitude to the referee for valuable comments. This research is supported by the Natural Science Foundation of China (No. 11671172).
Appendix A
The problem of determining det(Pt+1−xI) is reduced to finding ${\sum }_{i=1}^{{7}^{t}}{P}_{\alpha ,{\beta }_{{e}_{i}}}{({P}_{{\beta }_{{e}_{i}},{\beta }_{{e}_{i}}}-{xI})}^{-1}{P}_{{\beta }_{{e}_{i}},\alpha }$. Firstly, given ${w}_{\alpha ,{\beta }_{{e}_{i}}}$, there exists a positive real number w such that$\begin{eqnarray*}\displaystyle \frac{{w}_{\alpha ,{\beta }_{{e}_{i}}}}{w}=\left(\begin{array}{ccc}1 & 1 & 0\\ 1 & 0 & 1\end{array}\right).\end{eqnarray*}$Based on the symmetric property of the self-similar network by the substitution rule (see figure 2) and equations (2) and (3), we have $\begin{eqnarray}\begin{array}{l}\displaystyle \frac{{w}_{\alpha ,{\beta }_{{e}_{i}}}}{w}{\left({P}_{{\beta }_{{e}_{i}},{\beta }_{{e}_{i}}}-{xI}\right)}^{-1}({s}_{{\beta }_{{e}_{i}}}^{-1}{w}_{{\beta }_{{e}_{i}},\alpha })\\ \quad \ =\ \displaystyle \frac{{w}_{\alpha ,{\beta }_{{e}_{i}}}}{w}{\left({s}_{{\beta }_{{e}_{i}}}^{-1}{w}_{{\beta }_{{e}_{i}},{\beta }_{{e}_{i}}}-{xI}\right)}^{-1}({s}_{{\beta }_{{e}_{i}}}^{-1}{w}_{\alpha ,{\beta }_{{e}_{i}}}^{\top })\\ \quad \ =\ \displaystyle \frac{{w}_{\alpha ,{\beta }_{{e}_{i}}}}{w}{\left({w}_{{\beta }_{{e}_{i}},{\beta }_{{e}_{i}}}-{{xs}}_{{\beta }_{{e}_{i}}}\right)}^{-1}{w}_{\alpha ,{\beta }_{{e}_{i}}}^{\top }.\end{array}\end{eqnarray}$Here ${w}_{{\beta }_{{e}_{i}},{\beta }_{{e}_{i}}}-{{xs}}_{{\beta }_{{e}_{i}}}$ is symmetric and thus ${\left({w}_{{\beta }_{{e}_{i}},{\beta }_{{e}_{i}}}-{{xs}}_{{\beta }_{{e}_{i}}}\right)}^{-1}$ is symmetric. So we can obtain $\tfrac{{w}_{\alpha ,{\beta }_{{e}_{i}}}}{w}{\left({P}_{{\beta }_{{e}_{i}},{\beta }_{{e}_{i}}}-{xI}\right)}^{-1}({s}_{{\beta }_{{e}_{i}}}^{-1}{w}_{{\beta }_{{e}_{i}},\alpha })$ = $\tfrac{{w}_{\alpha ,{\beta }_{{e}_{i}}}}{w}{\left({w}_{{\beta }_{{e}_{i}},{\beta }_{{e}_{i}}}-{{xs}}_{{\beta }_{{e}_{i}}}\right)}^{-1}{w}_{\alpha ,{\beta }_{{e}_{i}}}^{\top }$ is symmetric.
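This reduction rests on the block-determinant (Schur complement) identity $\det \left(\begin{array}{cc}A & B\\ D & C\end{array}\right)=\det (C)\,\det (A-{{BC}}^{-1}D)$ for invertible C, applied here with A = −xI and the block-diagonal Cn. A generic numerical illustration of the identity (random matrices; this example is ours):

```python
import numpy as np

rng = np.random.default_rng(0)
n, m = 3, 5
A = rng.standard_normal((n, n))
B = rng.standard_normal((n, m))
D = rng.standard_normal((m, n))
C = rng.standard_normal((m, m))      # generically invertible

M = np.block([[A, B], [D, C]])
lhs = np.linalg.det(M)
rhs = np.linalg.det(C) * np.linalg.det(A - B @ np.linalg.inv(C) @ D)
print(np.isclose(lhs, rhs))  # True
```
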
Figure 2. Iterative construction method for the weighted self-similar network Gt by the substitution rule from t=0 to t=2.
Suppose ${P}_{{\beta }_{{e}_{i}},{\beta }_{{e}_{i}}}$ has characteristic polynomial Q(x)=det $({P}_{{\beta }_{{e}_{i}},{\beta }_{{e}_{i}}}-{xI})$. So we can obtain$\begin{eqnarray*}{\left({P}_{{\beta }_{{e}_{i}},{\beta }_{{e}_{i}}}-{xI}\right)}^{-1}=\displaystyle \frac{{\left({P}_{{\beta }_{{e}_{i}},{\beta }_{{e}_{i}}}-{xI}\right)}^{\ast }}{{\rm{\det }}\left({P}_{{\beta }_{{e}_{i}},{\beta }_{{e}_{i}}}-{xI}\right)}=\displaystyle \frac{{\left({P}_{{\beta }_{{e}_{i}},{\beta }_{{e}_{i}}}-{xI}\right)}^{\ast }}{Q(x)},\end{eqnarray*}$where ${({P}_{{\beta }_{{e}_{i}},{\beta }_{{e}_{i}}}-{xI})}^{* }$ is the adjugate (classical adjoint) matrix of ${P}_{{\beta }_{{e}_{i}},{\beta }_{{e}_{i}}}-{xI}$, every element of which is a polynomial in x. Because the product in equation (A1) is symmetric, we can write$\begin{eqnarray*}\displaystyle \frac{{w}_{\alpha ,{\beta }_{{e}_{i}}}}{w}{\left({P}_{{\beta }_{{e}_{i}},{\beta }_{{e}_{i}}}-{xI}\right)}^{-1}({s}_{{\beta }_{{e}_{i}}}^{-1}{w}_{{\beta }_{{e}_{i}},\alpha })=\left(\begin{array}{cc}{f}_{1}(x) & g(x)\\ g(x) & {f}_{2}(x)\end{array}\right).\end{eqnarray*}$
There is a permutation matrix $\sigma =\left(\begin{array}{cc}0 & 1\\ 1 & 0\end{array}\right)$ that satisfies$\begin{eqnarray*}\begin{array}{l}\sigma \left(\displaystyle \frac{{w}_{\alpha ,{\beta }_{{e}_{i}}}}{w}{\left({w}_{{\beta }_{{e}_{i}},{\beta }_{{e}_{i}}}-{{xs}}_{{\beta }_{{e}_{i}}}\right)}^{-1}{w}_{\alpha ,{\beta }_{{e}_{i}}}^{\top }\right)\sigma \\ \quad \ =\ \displaystyle \frac{{w}_{\alpha ,{\beta }_{{e}_{i}}}}{w}{\left({w}_{{\beta }_{{e}_{i}},{\beta }_{{e}_{i}}}-{{xs}}_{{\beta }_{{e}_{i}}}\right)}^{-1}{w}_{\alpha ,{\beta }_{{e}_{i}}}^{\top }.\end{array}\end{eqnarray*}$
Thus, the polynomials f1(x) and f2(x) are equal, so we deduce that$\begin{eqnarray}\displaystyle \frac{{w}_{\alpha ,{\beta }_{{e}_{i}}}}{w}{\left({P}_{{\beta }_{{e}_{i}},{\beta }_{{e}_{i}}}-{xI}\right)}^{-1}({s}_{{\beta }_{{e}_{i}}}^{-1}{w}_{{\beta }_{{e}_{i}},\alpha })=\left(\begin{array}{cc}f(x) & g(x)\\ g(x) & f(x)\end{array}\right).\end{eqnarray}$
Secondly, we calculate the polynomials Q(x), f(x) and g(x). Because of the symmetric property of the self-similar network generated by the substitution rule, we have$\begin{eqnarray*}\begin{array}{rcl}\displaystyle \frac{{w}_{\alpha ,{\beta }_{{e}_{i}}}}{w} & = & \left(\begin{array}{ccc}1 & 1 & 0\\ 1 & 0 & 1\end{array}\right),\ {s}_{{\beta }_{{e}_{i}}}^{-1}{w}_{{\beta }_{{e}_{i}},\alpha }=\left(\begin{array}{cc}\displaystyle \frac{1}{2r+2} & \displaystyle \frac{1}{2r+2}\\ \displaystyle \frac{1}{2r+1} & 0\\ 0 & \displaystyle \frac{1}{2r+1}\end{array}\right),\\ {P}_{{\beta }_{{e}_{i}},{\beta }_{{e}_{i}}} & = & \left(\begin{array}{ccc}0 & \displaystyle \frac{r}{2r+2} & \displaystyle \frac{r}{2r+2}\\ \displaystyle \frac{r}{2r+1} & 0 & \displaystyle \frac{r}{2r+1}\\ \displaystyle \frac{r}{2r+1} & \displaystyle \frac{r}{2r+1} & 0\end{array}\right).\end{array}\end{eqnarray*}$
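A quick numerical check (ours) confirms that the three roots appearing in the last factor of equation (3) are precisely the eigenvalues of the block ${P}_{{\beta }_{{e}_{i}},{\beta }_{{e}_{i}}}$ displayed above; for the root $-r/(2r+1)$ an explicit eigenvector is $(0,1,-1)^{\top }$:

```python
import numpy as np

r = 0.5
Pbb = np.array([[0.0, r / (2 * r + 2), r / (2 * r + 2)],
                [r / (2 * r + 1), 0.0, r / (2 * r + 1)],
                [r / (2 * r + 1), r / (2 * r + 1), 0.0]])
lam = np.sort(np.linalg.eigvals(Pbb).real)

# the three roots of the cubic factor in equation (3)
disc = r * np.sqrt((9 * r + 5) * (r + 1))
den = 2 * (2 * r**2 + 3 * r + 1)
roots = np.sort([(r + r**2 + disc) / den,
                 (r + r**2 - disc) / den,
                 -r / (2 * r + 1)])
print(np.allclose(lam, roots))  # True
```
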
So, we can obtain$\begin{eqnarray}Q(x)={\rm{\det }}\left({P}_{{\beta }_{{e}_{i}},{\beta }_{{e}_{i}}}-{xI}\right)={\rm{\det }}\left(\begin{array}{ccc}-x & \displaystyle \frac{r}{2r+2} & \displaystyle \frac{r}{2r+2}\\ \displaystyle \frac{r}{2r+1} & -x & \displaystyle \frac{r}{2r+1}\\ \displaystyle \frac{r}{2r+1} & \displaystyle \frac{r}{2r+1} & -x\end{array}\right),\end{eqnarray}$which, after multiplying by the constant $-2(r+1){(2r+1)}^{2}$ (this does not change the roots), gives$\begin{eqnarray*}2(4{r}^{3}{x}^{3}-3{r}^{3}x-{r}^{3}+8{r}^{2}{x}^{3}-2{r}^{2}x+5{{rx}}^{3}+{x}^{3}).\end{eqnarray*}$